Redirecting HTTP to HTTPS with the IIS URL Rewrite Module

by Ryan 1. February 2014 15:02

I should use TLS

Hi folks. I've been working on my HTTP web server for the past couple of weeks. It's pretty good so far. It uses the managed thread pool, does asynchronous request handling, tokenization (like %CURRENTDATE%, etc.), server-side includes, SSL/TLS support, can run multiple websites simultaneously, etc. It seems really fast, but I haven't put it through any serious load tests yet. My original goal was to use it to completely replace this blog and content management system (IIS and BlogEngine.NET), but now I'm starting to think that won't happen, mainly because I would need to convert years' worth of content to a new format, and I would want to preserve the URLs so that if anyone on the internet had ever linked to any of my blog posts, they wouldn't all turn to 404s when I made the switch. That's actually more daunting to me than writing a server and a CMS.



So I'm taking a break from that and instead improving what I've already got here. One thing I've always wanted to do but never got around to was redirecting HTTP to HTTPS. We should be using SSL/TLS to encrypt our web traffic whenever possible. Especially when it's this simple:

First, open IIS Manager, and use Web Platform Installer to install the URL Rewrite 2.0 module:

Install URL Rewrite


Now at the server level, open URL Rewrite and add a new blank rule:

Add Blank Rule


Next, you want to match the (.*) regex pattern:

Edit the rule


And finally add the following condition:


Condition


And that's all there is to it!  This is, of course, assuming that you've already got an SSL certificate bound to the website and that you're listening on both ports 80 and 443.
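
If you want to sanity-check the bindings half of that assumption first, the WebAdministration module (included with IIS on Server 2008 R2 and later) can list them for you. The site name here is just an example:

PS C:\> Import-Module WebAdministration
PS C:\> Get-WebBinding -Name "Default Web Site"

You should see bindings for both http on port 80 and https on port 443 before you turn the redirect on.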

Note that you can also do this exact same thing by editing your web.config file by hand:


    
<system.webServer>
   ...
   <rewrite>
      <rules>
        <rule name="Redirect to HTTPS" stopProcessing="true" enabled="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="^OFF$" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" />
        </rule>
      </rules>
   </rewrite>
</system.webServer>

And finally, a tip: Don't put resources such as images in your website that are sourced over HTTP, regardless of whether they come from your own server or someone else's. Use relative paths (e.g. src="/images/logo.png"), or at least an https:// prefix, instead of src="http://...". Web browsers typically complain about, or even refuse to load, "insecure" content on a secure page.

Tired of the NSA Seeing What Model of Plug and Play Mouse You're Using?

by Ryan 14. January 2014 13:40

Not long ago, the story broke that the NSA was capturing internet traffic generated by Windows crash dumps, driver downloads from Windows Updates, Windows Error Reporting, etc.

As per Microsoft's policy, this information, when it contains sensitive or personally identifiable data, is encrypted.

Encryption: All report data that could include personally identifiable information is encrypted (HTTPS) during transmission. The software "parameters" information, which includes such information as the application name and version, module name and version, and exception code, is not encrypted.

While I'm not saying that SSL/TLS poses an impenetrable obstacle for the likes of the NSA, I am saying that Microsoft is not just sending full memory dumps across the internet in clear text every time something crashes on your machine.  But if you were to, for instance, plug in a new Logitech USB mouse, your computer very well could try to download drivers for it from Windows Update automatically, and when that happens, it sends a few details about your PC and the device you just plugged in, in clear text.

Here is where you can read more about that.

So let's say you're an enterprise administrator, and you want to put an end to all this nonsense for all the computers in your organization, such that your computers no longer attempt to contact Microsoft or send data to them when an application crashes or someone installs a new device.  Aside from setting up your own internal corporate Windows Error Reporting server (who does that?), you can disable the behavior via Group Policy. There are a surprising number of policy settings that should be disabled so that you're not leaking data all over the web (I've also included a registry-based sketch of a few of them after the list):

  • The system will be configured to prevent automatic forwarding of error information.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Internet Communication Management -> Internet Communication settings -> “Turn off Windows Error Reporting” to “Enabled”.

  • An Error Report will not be sent when a generic device driver is installed.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> "Do not send a Windows error report when a generic driver is installed on a device" to "Enabled".

  • Additional data requests in response to Error Reporting will be declined.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Windows Error Reporting -> "Do not send additional data" to "Enabled".

  • Errors in handwriting recognition on Tablet PCs will not be reported to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Internet Communication Management -> Internet Communication settings -> “Turn off handwriting recognition error reporting” to “Enabled”.

  • Windows Error Reporting to Microsoft will be disabled.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Windows Error Reporting -> “Disable Windows Error Reporting” to “Enabled”.

  • Windows will be prevented from sending an error report when a device driver requests additional software during installation.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Prevent Windows from sending an error report when a device driver requests additional software during installation” to “Enabled”.

  • Microsoft Support Diagnostic Tool (MSDT) interactive communication with Microsoft will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Microsoft Support Diagnostic Tool -> “Microsoft Support Diagnostic Tool: Turn on MSDT interactive communication with Support Provider” to “Disabled”.

  • Access to Windows Online Troubleshooting Service (WOTS) will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Scripted Diagnostics -> “Troubleshooting: Allow users to access online troubleshooting content on Microsoft servers from the Troubleshooting Control Panel (via Windows Online Troubleshooting Service - WOTS)” to “Disabled”.

  • Responsiveness events will be prevented from being aggregated and sent to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Troubleshooting and Diagnostics -> Windows Performance PerfTrack -> “Enable/Disable PerfTrack” to “Disabled”.

  • The Application Compatibility Program Inventory will be prevented from collecting data and sending the information to Microsoft.

Configure the policy value for Computer Configuration -> Administrative Templates -> Windows Components -> Application Compatibility -> “Turn off Program Inventory” to “Enabled”.

  • Device driver searches using Windows Update will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Specify Search Order for device driver source locations” to “Enabled: Do not search Windows Update”.

  • Device metadata retrieval from the Internet will be prevented.

Configure the policy value for Computer Configuration -> Administrative Templates -> System -> Device Installation -> “Prevent device metadata retrieval from the Internet” to “Enabled”.

  • Windows Update will be prevented from searching for point and print drivers.

Configure the policy value for Computer Configuration -> Administrative Templates -> Printers -> “Extend Point and Print connection to search Windows Update” to “Disabled”.
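
If you'd rather script a few of these on standalone machines (or just verify what the GPO actually wrote), most of these settings boil down to registry values under HKLM\SOFTWARE\Policies. Here's a hedged PowerShell sketch covering three of them - these are the value names I believe the corresponding ADMX templates map to, so verify against your own templates before relying on it:

# Disable Windows Error Reporting entirely (Windows Components -> Windows Error Reporting).
New-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting" -Force | Out-Null
Set-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Windows Error Reporting" -Name Disabled -Value 1 -Type DWord

# Don't search Windows Update for device drivers (Device Installation -> Specify Search Order).
New-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows\DriverSearching" -Force | Out-Null
Set-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\DriverSearching" -Name SearchOrderConfig -Value 0 -Type DWord

# Prevent device metadata retrieval from the Internet.
New-Item "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Device Metadata" -Force | Out-Null
Set-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\Device Metadata" -Name PreventDeviceMetadataFromNetwork -Value 1 -Type DWord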

Let's Deploy EMET 4.1!

by Ryan 26. December 2013 17:22

Howdy!

Let's talk about EMET, the Enhanced Mitigation Experience Toolkit.  Version 4.1 is the latest update, out just last month. It's free. You can find it here. When you download it, make sure you also download the user guide PDF that comes with it, as it's actually pretty good quality documentation for a free tool.

The thing about EMET is that it is not antivirus. It's not signature-based the way that traditional AV is; EMET is behavior-based.  It monitors the system in real time, watches all running processes for signs of malicious behavior, and attempts to prevent it.  It also applies a set of system-wide hardening policies that make many types of exploits more difficult or impossible to pull off. The upshot of this approach is that EMET can theoretically thwart 0-days and other malware/exploits that antivirus is oblivious to.  It also allows us to protect legacy applications that may not have been originally written with today's security features in mind.

The world of computer security is all about measure and countermeasure. The attackers come up with a new attack, then the defenders devise a defense against it, then the attackers come up with a way to get around that defense, ad nauseam, forever. But anything you can do to raise the bar for the attackers - to make their jobs harder - should be done.

Here's what the EMET application and system tray icon look like once it's installed:

EMET

From that screenshot, you get an idea of some of the malicious behavior that EMET is trying to guard against.  You can turn on mandatory DEP for all processes, even ones that weren't compiled for it. Data Execution Prevention has been around for a long time, and is basically a mechanism to prevent the execution of code that resides in areas of memory marked as non-executable. (I.e. where only data, not code, should be.)  With DEP on, heaps and stacks will be marked as non-executable and attempts to run code from those areas in memory will fail. Most CPUs these days have the technology baked right into the hardware, so there's no subverting it. (Knock on wood.)

You can turn on mandatory ASLR for all processes on the system, again, even for processes that were not compiled with it.  Address Space Layout Randomization is a technique whereby a process loads modules into "random" memory addresses, whereas in the days before ASLR processes always loaded modules into the same, predictable memory locations. Imagine what an advantage it would be for an attacker to always know exactly where to find modules loaded in any process on any machine.

Then you have your heapspray mitigation. A "heap spray" is an attack technique where the attacker places copies of malicious code in as many locations within the heap as possible, increasing the odds of success that it will be executed once the instruction pointer is manipulated. This is a technique that attackers came up with to aid them against ASLR, since they could no longer rely on predictable memory addresses. By allocating some commonly-used memory pages within processes ahead of time, we can keep the heap sprayer's odds of success low.

Those are only a few of the mitigations that EMET is capable of.  Read that user guide that I mentioned before for much more info.

Oh, and one last thing: Is it possible that EMET could cause certain applications to malfunction? Absolutely! So always test thoroughly before deploying to production. And just like with enterprise-grade antivirus software, EMET also requires a good bit of configuring until you come up with a good policy that suits your environment and gives you the best mix of protection versus application compatibility.

Let's get into how EMET can be deployed across an enterprise and configured via Group Policy. Once you've installed it on one computer, you will notice a Deployment folder in with the program files. In the Deployment folder you will find the Group Policy template files you need to configure EMET across your enterprise via GPO.  First, create your Group Policy Central Store if you haven't already:

Creating Central Store

Copy the EMET.ADMX file into the PolicyDefinitions folder, and the EMET.ADML file into the en-US subfolder.  The copy itself is just a couple of files - something like this, assuming your domain is contoso.com and the default EMET 4.1 install path (adjust both to suit):
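
PS C:\> Copy-Item "C:\Program Files (x86)\EMET 4.1\Deployment\Group Policy Files\EMET.admx" "\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\"
PS C:\> Copy-Item "C:\Program Files (x86)\EMET 4.1\Deployment\Group Policy Files\EMET.adml" "\\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\en-US\"

If all goes well, you will notice a new Administrative Template when you go to create a new GPO: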

EMET GPO

Now you may notice that while I do have the EMET administrative template... all my other admin templates have disappeared! That's because I forgot to copy all of the other admin templates from one of the domain controllers into Sysvol before I took the screen shot. So don't forget to copy over all the other *.admx and *.adml files from one of your DCs before continuing.

Now you can control how EMET is configured in a centralized, consistent, enforceable way on all the computers in your organization.

The next part is deploying the software. The EMET user guide describes using System Center Configuration Manager to deploy the software, and while I agree that SCCM is boss when it comes to deploying software, I don't have it installed here in my lab, so I'm going to just do it via GPO as well.  In fact, I'll do it in the same EMET GPO that defines the application settings too.

Copy the installer to a network share that can be accessed by all the domain members that you intend to deploy the software to:

Copy the MSI

Then create a new GPO with a software package to deploy that MSI from the network location. Make sure the software is assigned to the computer, not the user.  And lastly, you'll likely rip less of your hair out if you turn off asynchronous policy processing like so:

Computer Settings
 + Administrative Templates
    + System
       + Logon
          + Always wait for the network at computer startup and logon: Enabled

And that's software deployment across an entire organization - it's that simple. Luckily I didn't even have to apply a transform to that MSI, which is good, because that is something I didn't feel like doing this evening.

Until next time, stay safe, and if you still want to hear more about EMET, watch this cool talk from Neil Sikka about it from Defcon 21!

Finding Hidden Processes with Volatility and Scanning with Sysinternals Sigcheck

by Ryan 19. December 2013 16:15

Happy Memory-Forensics-Thursday!

I get called in to go malware hunting every once in a while.  It's usually after an automatic vulnerability scanner has found something unusual about a particular computer on the network and thrown up a red flag.  Once a machine is suspected of being infected, someone needs to go in and validate whether what the vulnerability scanner found is truly a compromise or a false positive, determine the nature of the infection, and clean it if possible.  I know that the "safest" reaction to the slightest whiff of malware is to immediately disconnect the machine from the network, format it and reinstall the operating system, but in a busy production environment, that extreme approach isn't always feasible or necessary.

We all know that no antivirus product can catch everything, nor is any vulnerability scanner perfect.  But a human with a bit of skill and the right tools can quickly sniff out things that AV has missed.  Malware hunting and forensic analysis really puts one's knowledge of deep Windows internals to the test, possibly more so than anything else, so I find it extremely fun and rewarding.

So today we're going to talk about two tools that will aid you in your journey.  Volatility and Sigcheck.

Volatility is a wondrous framework for analyzing Windows memory dumps.  You can find it here. It's free and open-source.  It's written in Python, but there is also a compiled exe version if you don't have Python installed.  Volatility is a framework that can run any number of plugins, and these plugins perform data analyses on memory dumps, focused on pointing out specific indicators of compromise, such as API hooks, hidden processes, hooked driver IRP functions, interrupt descriptor table hooks, and so much more.  It's not magic though, and it doesn't do much that you could not also do manually (and with much more painstaking effort) with WinDbg, but it does make it a hell of a lot faster and easier.  (We have to wait until 2014 for Win8/8.1 and Server 2012/2012R2 support.)

But first, before you can use Volatility, you must have a memory dump.  (There is a technology preview branch of Volatility that can read directly from the PhysicalMemory device object.)  There are many tools that can dump memory, such as WinPmem, which you can also find on the Volatility downloads page that I linked to earlier.  It can dump in both RAW and DMP (Windows crash dump) formats.  Make sure that you download a version with signed drivers, as WinPmem loads a driver to do its business, and modern versions of Windows really don't like you trying to install unsigned drivers.  You can also use LiveKd to dump memory with the command .dump -f C:\memory.dmp.

Since Volatility is such a huge and versatile tool, today I'm only going to talk about one little piece of it - finding "hidden" processes.

When a process is created, the Windows kernel assigns it an _EPROCESS data structure.  Each _EPROCESS structure in turn contains a _LIST_ENTRY structure.  That _LIST_ENTRY structure contains a forward link and a backward link, each pointing to the next _EPROCESS structure on either side of it, creating a doubly-linked list that makes a full circle.  So if I wanted to know all of the processes running on the system, I could start with any process and walk through the _EPROCESS list until I got back to where I started.  When I use Task Manager, tasklist.exe or Process Explorer, they all use API functions that in turn rely on this fundamental mechanism.  Behold my awesome Paint.NET skills:

EPROCESS Good

So if we wanted to hide a process from view, all we have to do is overwrite the backward link of the process in front of us and the forward link of the process behind us to point around us.  That will effectively "unlink" our process of doom, causing it to be hidden:

EPROCESS BAD

This is what we call DKOM - Direct Kernel Object Manipulation.  A lot of rootkits and trojans use this technique.  And even though modern versions of Windows do not allow user-mode access to the \Device\PhysicalMemory object - which is where the _EPROCESS objects will always be, because they're in the nonpaged pool - we don't need it, nor do we need to load a kernel-mode driver, because we can pull off a DKOM attack entirely from user mode by using the ZwSystemDebugControl API.  But we can root out the rootkits with Volatility, using this command:

C:\> volatility.exe --profile=Win7SP0x86 -f Memory.raw psscan

That command shows a list of running processes, but it does so not by walking the _EPROCESS linked list, but by scanning for pool tags and constrained data items (CDIs) that correspond to processes.  The idea is that you compare that list with a list of processes that you got via traditional means; processes that show up as alive and well in Volatility's psscan list but not in Task Manager's list are hidden processes, probably up to no good.

There are other methods of finding hidden processes.  For instance, scanning for DISPATCHER_HEADER objects instead of looking at pool tags.  Even easier, a handle to the hidden process should still exist in the handle table of csrss.exe (Client/Server Runtime Subsystem) even after it's been unlinked from the _EPROCESS list, so don't forget to look there.  (There's a csrss_pslist plugin for Volatility as well.)  Also, use the thrdscan plugin to check for threads that belong to processes that don't appear to exist, which would be another sign of tomfoolery.
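
Newer builds of Volatility also include a psxview plugin that automates this cross-referencing for you, comparing the results of several different process-listing techniques (pslist, psscan, the csrss handle table, and others) side by side in one table:

C:\> volatility.exe --profile=Win7SP0x86 -f Memory.raw psxview

A legitimate process shows up in every column; a process that is "False" in pslist but "True" in psscan deserves a very close look.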

Alright, so now you've located an executable file that you suspect is malware, but you're not sure.  Scan that sucker with Sigcheck!  Mark Russinovich recently added VirusTotal integration into Sigcheck, with the ability to automatically upload unsigned binaries and have them scanned by 40+ antivirus engines and give you back reports on whether the file appears to be malicious!  Sigcheck can automatically scan through an entire directory structure, just looking for suspicious binaries, uploading them to VirusTotal, and showing you the results.
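
Here's the sort of invocation I mean - a sketch, and since the VirusTotal switches are a recent addition, check sigcheck -? against your version first:

C:\> sigcheck.exe -s -e -u -v -vt C:\Users\Ryan\Downloads

Here -s recurses subdirectories, -e scans executable images only, -u limits the output to unsigned (or VirusTotal-unknown) files, -v queries VirusTotal, and -vt accepts the VirusTotal terms of service.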

Remember that you must accept VirusTotal's terms and conditions before using the service.

Uploading suspicious files to VirusTotal is practically a civic responsibility, as the more malicious signatures that VirusTotal has on file, the more effective the antivirus service is for the whole world.

More Windows and AD Cryptography Mumbo-Jumbo

by Ryan 6. November 2013 09:41

I've still had my head pretty deep into cryptography and hashing as far as Windows and Active Directory is concerned, and I figured it was worth putting here in case you're interested.  We're going to talk about things like NTLM and how Windows stores hashes, and more.

The term NTLM is a loaded one, as the acronym is often used to refer to several different things.  It not only refers to Microsoft’s implementation of another standard algorithm for creating hashes, but it also refers to a network protocol.  The NTLM used for storing password hashes on disk (aka the NT hash) is a totally different thing than the NTLM used to transmit authentication data across a TCP/IP network.  There’s the original LAN Manager protocol, which is worse than NT LAN Manager (NTLM or NTLMv1), which is worse than NTLMv2, which is worse than NTLMv2 with Session Security, and so on…  but an NT hash is an NT hash is an NT hash.  When we refer to either NTLMv1 or NTLMv2 specifically, we’re not talking about how the password gets stored on disk, we’re talking about network protocols.

Also, this information refers to Vista/2008+ era stuff.  I don’t want to delve into the ancient history of Windows NT 4 or 2000, so let’s not even discuss LAN Manager/LM.  LM hashes are never stored or transmitted, ever, in an environment that consists of Vista/2008+ stuff.  It’s extinct.

Unless some bonehead admin purposely turned it back on.  In which case, fire him/her.

Moving on…

LOCAL MACHINE STUFF

So here we talk about what goes on in a regular Windows machine with no network connection.  No domain.  All local stuff. No network.

"SO REALLY, WTF IS AN NT HASH!?"

You might ask.  An NT hash is simply the MD4 hash of the little endian UTF-16 encoded plaintext input.  So really it’s MD4.

"So the Windows Security Accounts Manager (SAM) stores MD4 hashes of passwords to disk, then?"

Well no, not directly.  The NT hashes, before being stored, are encrypted with RC4 using the machine’s "boot key," which is both hard to get at and unique to each OS install.  By "hard to get at," I mean that the boot key is scattered across several areas of the registry that require system-level access, you have to know how to read data out of the registry that cannot be seen in Regedit even if you run it as Local System, and it must be de-obfuscated on top of that.  The boot key and other bits of data are hashed with MD5, and then that MD5 hash, plus some more information such as the specific user's security identifier, is used to create the RC4 key.

So the final answer is that Windows stores local SAM passwords to disk in the registry as RC4-encrypted MD4 hashes using a key that is unique to every machine and is difficult to extract and descramble, unless you happen to be using one of the dozen or so tools that people have written to automate the process.

Active Directory is a different matter altogether.  Active Directory does not store domain user passwords in a local SAM database the same way that a standalone Windows machine stores local user passwords.  Active Directory stores those password hashes in a file on disk named NTDS.dit.  The only password hash that should be stored in the local SAM of a domain controller is the Directory Services Restore Mode password.  The algorithms used to save passwords in NTDS.dit are much different than the algorithms used by standalone Windows machines to store local passwords.  Before I tell you what those algorithms are, I want to mention that the algorithms AD uses to store domain password hashes on disk should not be in scope to auditors, because NTDS.dit is not accessible by unauthorized users.  The operating system maintains an exclusive lock on it and you cannot access it as long as the operating system is running.  Because of that,  the online Directory Services Engine and NTDS.dit together should be treated as one self-contained ‘cryptographic module’ and as such falls under the FIPS 140-2 clause:

"Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form.  Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…"

So even plaintext secrets are acceptable to FIPS 140, as long as they stay within the cryptographic module and cannot be accessed by or sent to outsiders.

Active Directory stores not only the hashed password of domain users, but also their password history.  This is useful for that “Remember the last 24 passwords” Group Policy setting.  So there are encrypted NT hashes stored in NTDS.dit.  Let’s just assume we have an offline NTDS.dit – again, this should not be of any concern to auditors – this is Microsoft proprietary information and was obtained through reverse engineering.  It’s only internal to AD.  FIPS should not be concerned with this because this all takes place "within the cryptographic module."  Access to offline copies of NTDS.dit should be governed by how you protect your backups.

To decrypt a hash in NTDS.dit, first you need to decrypt the Password Encryption Key (PEK) which is itself encrypted and stored in NTDS.dit.  The PEK is the same across all domain controllers, but it is encrypted using the boot key (yes the same one discussed earlier) which is unique on every domain controller.  So once you have recovered the bootkey of a domain controller (which probably means you have already completely owned that domain controller and thus the entire domain so I'm not sure why you'd even be doing this) you can decrypt the PEK contained inside of an offline copy of NTDS.dit that came from that same domain controller.  To do that, you hash the bootkey 1000 times with MD5 and then use that result as the key to the RC4 cipher.  The only point to a thousand rounds of hashing is to make a brute force attack more time consuming.

OK, so now you’ve decrypted the PEK.  So use that decrypted PEK, plus 16 bytes of the encrypted hash itself as key material for another round of RC4.  Finally, use the SID of the user whose hash you are trying to decrypt as the key to a final round of DES to uncover, at last, the NT (MD4) hash for that user.

Now you need to brute-force attack that hash.  Using the program ighashgpu.exe, which uses CUDA to enable all 1344 processing cores on my GeForce GTX 670 graphics card to make brute force attempts on one hash in parallel, I can perform over 4 billion attempts per second to eventually arrive at the original plaintext password of the user.  It doesn’t take long to crack an NT hash any more.

As a side-note, so-called "cached credentials" are actually nothing more than password verifiers.  They’re essentially a hash of a hash, and there is no reversible information contained in a "cached credential" or any information that is of any interest to an attacker.  "Cached credentials" pose no security concern, yet most security firms, out of ignorance, still insist that they be disabled.

So there you have it.  You might notice that nowhere in Windows local password storage or Active Directory password storage was the acronym SHA ever used.  There is no SHA usage anywhere in the above processes, at all. 

 

NETWORK STUFF

Now passing authentication material across the network is an entirely different situation!

I’ll start with the bad news.  Remember earlier when I talked about brute-forcing the decrypted NT hash of another user?  Well that last step is often not even necessary.  NT hashes are password-equivalent, meaning that if I give Windows your hash, it’s as good as giving Windows your password in certain scenarios.  I don’t even need to know what your actual password is.  This is the pass-the-hash attack that you might have heard of.  But it’s not as bad as it sounds.

The good news is that neither Windows nor Active Directory ever sends your bare NT hash over the wire during network transmission.  And you cannot begin a pass-the-hash attack until you’ve already taken over administrative control of some domain joined machine.  That means there is no such thing as using pass-the-hash to own an entire networked AD environment just from an anonymous observer sniffing network packets.  That’s not how it works.  (Fun video on PtH)

Now that we know what an NT hash is, it’s a good time to draw the distinction that whenever we talk about specifically NTLMv1 and NTLMv2, we’re not actually talking about NT hashes anymore.  We’re talking about network communication protocols.  The whole mess is often just called NTLM as a blanket term because it’s all implemented by Microsoft products and it’s all interrelated.

Both NTLMv1 and NTLMv2 are challenge-response protocols, in which Client and Server exchange challenges and responses such that Client can prove to Server that Client knows his/her password, without ever actually sending the password or its hash directly over the network.  This works because Server already knows Client’s password (hash), either because it’s stored in the local SAM in the case of local Windows accounts, or because Server can forward Client’s data to a domain controller, and the domain controller can verify and respond to Server with “Yep, that’s his password, alright.”

With NTLMv1, you’ll see some usage of DES during network communication. 

With NTLMv2, you’ll see some usage of HMAC-MD5.

There’s also NTLM2 Session, aka NTLMv2 With Session Security, but it uses the same encryption and hashing algorithms as NTLMv2.

It is possible to completely remove the usage of NTLM network protocols from an Active Directory domain and go pure Kerberos, but it will break many applications.  Here is a fantastic article written by one of my favorite Microsoft employees about doing just that.

So let’s assume that hypothetically we blocked all usage of NTLM network protocols and went pure Kerberos. Kerberos in AD supports only the following encryption types:

DES_CBC_CRC    (Disabled by default as of Win7/2008R2) [Source]
DES_CBC_MD5    (Disabled by default as of Win7/2008R2)
RC4_HMAC_MD5
AES256-CTS-HMAC-SHA1-96
AES128-CTS-HMAC-SHA1-96
Future encryption types
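
As an aside, which of those encryption types a given machine will offer is controlled by the "Network security: Configure encryption types allowed for Kerberos" security policy, which, as I understand it, is backed by a single registry value. A sketch that restricts a machine to the AES types only:

# 0x8 = AES128-CTS-HMAC-SHA1-96, 0x10 = AES256-CTS-HMAC-SHA1-96, so 0x18 = both (and nothing else).
New-Item "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters" -Force | Out-Null
Set-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters" -Name SupportedEncryptionTypes -Value 0x18 -Type DWord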

 

Of course, there are plenty of other Windows applications that pass authentication traffic over the network besides just AD.  Remote Desktop is a great example.  Remote Desktop traditionally uses RC4, but modern versions of Remote Desktop will negotiate a Transport Layer Security (TLS) connection wherever possible (also known as Network Level Authentication, or NLA).  This is great news, because that TLS connection uses the computer’s digital certificate, and that certificate can be automatically created and assigned to the computer by an Enterprise Certificate Authority, and it can be capable of SHA256, SHA384, etc. - however the Certificate Authority administrator defines it.

If you turn on FIPS mode, Remote Desktop can only use TLS 1.0 (as opposed to SSL) when NLA is negotiated, and it can only use 3DES_CBC instead of RC4 when TLS is not negotiated.

Other ciphers that are turned off when FIPS mode is turned on include:

  • TLS_RSA_WITH_RC4_128_SHA
  • TLS_RSA_WITH_RC4_128_MD5
  • SSL_CK_RC4_128_WITH_MD5
  • SSL_CK_DES_192_EDE3_CBC_WITH_MD5
  • TLS_RSA_WITH_NULL_MD5
  • TLS_RSA_WITH_NULL_SHA

 

That will apply to all applications running on Windows that rely upon Schannel.dll.  The application will crash if it calls upon one of the above ciphers when FIPS mode is enabled.

So anyway, that’s about all I got right now.  If you made it to the bottom of  this post I should probably buy you a drink!

 

FIPS 140

by Ryan 29. October 2013 21:23

FIPS 140-2 Logo

Oh yeah, I have a blog! I almost forgot.  I've been busy working.  Let's talk about an extraordinarily fascinating topic: Federal compliance!

FIPS (Federal Information Processing Standards) covers many different standards, and it holds sway mainly in the U.S. and Canada.  Within each standard, there are multiple revisions and multiple levels of classification.  FIPS 140 is about encryption and hashing algorithms - it’s about accrediting cryptographic modules.  Here’s an example of a certificate.  The FIPS 140-2 revision is the current standard, and FIPS 140-3 is under development with no announced release date yet.  It does not matter if your homebrew cryptography is technically “better” than anything else ever; until your cryptographic module - your source code, device, or library - has gone through the government's submission and certification process, it is not FIPS-approved.  In fact, the government is free to certify weaker algorithms in favor of stronger ones just because the weaker algorithms have undergone the certification process when the stronger ones have not, and it has historically done so.  (Triple-DES being the prime example.)

There is even a welcome kit, with stickers.  You need to put these tamper-proof stickers on your stuff for certain levels of FIPS compliance.

So if you are ever writing any software of your own, please do not try to roll your own cryptography. Use the approved libraries that have already gone through certification. Your custom crypto has about a 100% chance of being worse than AES/SHA (NSA backdoors notwithstanding,) and it will never be certifiable for use in a secure Federal environment anyway.  Also avoid things like re-hashing your hash with another hashing algorithm in attempt to be ‘clever’ – doing so can ironically make your hash weaker.

And the Feds are picky.  For instance, if programming for Windows in .NET, the use of the System.Security.Cryptography.SHA1 classes may be acceptable while the use of the System.Security.Cryptography.SHA1Managed classes is not.  It doesn’t mean the methods in the SHA1Managed classes are any worse; it simply means Microsoft has not submitted them for approval.

Many major vendors such as Microsoft and Cisco go through this process for every new version of product that they release.  It costs money and time to get your product FIPS-certified.  Maybe it’s a Cisco ASA appliance, or maybe it’s a simple Windows DLL. 

The most recent publication of FIPS 140-2 Annex A lists approved security functions (algorithms).  It lists AES and SHA-1 as acceptable, among others. So if your application uses only approved implementations of AES and SHA-1 algorithms, then that application should be acceptable according to FIPS 140-2.  If your application uses an MD5 hashing algorithm during communication, that product is NOT acceptable for use in an environment where FIPS compliance must be maintained.

However, there is also this contradictory quote from NIST:

“The U.S. National Institute of Standards and Technology says, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010" [22]”

So it seems to me that there are contradictory government statements regarding the usage of security functions.  The most recent draft of FIPS 140-2 Annex A clearly lists SHA-1 as an acceptable hashing algorithm, yet, the quote from NIST says that government agencies must use only SHA-2 after 2010.  Not sure what the answer is to that. 

These algorithms can be broken up into two categories: encryption algorithms and hashing algorithms.  An example of a FIPS encryption algorithm is AES (which consists of three members of the Rijndael family of ciphers, adopted in 2001, and has a much cooler name.)  Encryption algorithms can be reversed/decrypted, that is, converted back into their original form from before they were encrypted.

Hashing algorithms on the other hand, are also known as one-way functions.  They are mathematically one-way and cannot be reversed.  Once you hash something, you cannot “un-hash” it, no matter how much computing power you have.  Hashing algorithms take any amount of data, of an arbitrary size, and mathematically map it to a “hash” of fixed length.  For instance, the SHA-256 algorithm will map any chunk of data, whether it be 10 bytes or 2 gigabytes, into a 256 bit hash.  Always 256 bit output, no matter the size of the input.
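
You can see that fixed-length property for yourself in PowerShell, using the .NET SHA256 class directly:

$sha256 = [System.Security.Cryptography.SHA256]::Create()
# A 2-byte input and a 1 MB input both produce a 32-byte (256-bit) hash:
$sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("hi")).Length
$sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("x" * 1MB)).Length

Both of those lines print 32, no matter what you feed in.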

This is why the hash of a password is generally considered decently secure, because there is NO way to reverse the hash, so you can pass that hash to someone else via insecure means (e.g. over a network connection,) and if the other person knows what your password should be, then they can know that the hash you gave them proves that you know the actual password.  That's a bit of a simplification, but it gets the point across.

If you were trying to attack a hash, all you can do, if you know what hash algorithm was used, is to keep feeding that same hash algorithm new inputs, maybe millions or billions of new inputs a second, and hope that maybe you can reproduce the same hash.  If you can reproduce the same hash, then you know your input was the same as the original ‘plaintext’ that you were trying to figure out.  Maybe it was somebody’s password.  This is the essence of a ‘brute-force’ attack against a password hash.

Logically, if all inputs, regardless of size, are mapped to a fixed size, then it stands to reason that there must be multiple sets of data that, when hashed, result in the same hash.  These are known as hash collisions.  They are very rare, but they are very bad, and collisions are the reason we needed to migrate away from the MD5 hashing algorithm, and we will eventually need to migrate away from the SHA-1 hashing algorithm.  (No collisions have been found in SHA-1 yet that I know of.)  Imagine if I could create a fake SSL certificate that, when I creatively flipped a few bits here and there, resulted in the same hash as a popular globally trusted certificate!  That would be very bad.

Also worth noting is that SHA-2 is an umbrella term that includes SHA-256, SHA-384, SHA-512, etc.

FIPS 140 is only concerned with algorithms used for external communication - any communication outside of the application or module, whether that be network communication or communication with another application on the same system.  FIPS 140 is not concerned with algorithms used to handle data within the application itself, within its own private memory, that never leaves the application and cannot be accessed by unauthorized users.  Here is an excerpt from the 140-2 standard to back up my claim:

“Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form. Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…”

Let’s use Active Directory as an example.  This is why, when someone gets concerned about what algorithms AD uses internally, you should refer them to the above paragraph and tell them not to worry about it.  Even if it were plaintext (it’s not, but even if hypothetically it were,) it isn’t in scope for FIPS because it is internal only to the application.  When Active Directory and its domain members are operated in FIPS mode, connections made via Schannel.dll, Remote Desktop, etc., will only use FIPS compliant algorithms. If you had applications before that make calls to non-FIPS crypto libraries, those applications will now crash.
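
Incidentally, "FIPS mode" itself is just the security policy "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing," which on Vista/2008 and later is reflected by a single registry value that you can check:

PS C:\> (Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy).Enabled
# 1 means FIPS mode is on; 0 (or a missing value) means it is off.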

Another loophole that has appeared to satisfy FIPS requirements in the past is wrapping a weaker algorithm inside of a stronger one.  For instance, a classic implementation of the RADIUS protocol utilizes the MD5 hashing algorithm during network communications.  MD5 is a big no-no.  However, see this excerpt from Cisco:

“RADIUS keywrap support is an extension of the RADIUS protocol. It provides a FIPS-certifiable means for the Cisco Access Control Server (ACS) to authenticate RADIUS messages and distribute session keys.”

So by simply wrapping weaker RADIUS keys inside of AES, it becomes FIPS-certifiable once again.  It would seem to follow that this logic also applies when using TLS and IPsec, as they are able to use very strong algorithms (such as SHA-2) that most applications do not natively support.

So with all that said: if you need the highest levels of network security, you need 802.1x and IPsec to protect all those applications that can't protect themselves.

Generating Certificate Requests With Certreq

by Ryan 18. September 2013 08:01

Hey there,

SSL/TLS and the certificates it comes with are becoming more ubiquitous every day.  The system is not without its flaws (BEAST, hash collision attacks, etc.), but it's still generally regarded as "pretty good," and it's downright mandatory in any network that needs even a modicum of security.

One major downside is the administrative burden of having to keep track of and renew all those certificates, but Active Directory Certificate Services does a wonderful job of automating a lot of that away.  Many Windows administrators' lives would be a living hell if it weren't for Active Directory-integrated auto-enrollment.

But sometimes you don't always have the pleasure of working with an Enterprise CA. Sometimes you need to manually request a certificate from a non-Microsoft certificate authority, or a CA that is kept offline, etc.  Most people immediately start thinking about OpenSSL, which is a fine, multiplatform open-source tool.  But I usually seek out native tools that I already have on my Windows servers before I go download something off the internet that duplicates functionality that already comes with Windows.

Which brings me to certreq.  I use this guy to generate CSRs (certificate requests) when I need to submit one to a CA that isn't part of my AD forest or cannot otherwise be used in an auto-enrollment scenario. First paste something like this into an *.inf file:

;----------------- csr.inf -----------------
[Version]
Signature="$Windows NT$"

[NewRequest]
Subject = "CN=web01.contoso.com, O=Contoso LLC, L=Redmond, S=Washington, C=US" 
KeySpec = 1
KeyLength = 2048
; Can be 1024, 2048, 4096, 8192, or 16384.
; Larger key sizes are more secure, but have
; a greater impact on performance.
Exportable = TRUE
MachineKeySet = TRUE
SMIME = False
PrivateKeyArchive = FALSE
UserProtected = FALSE
UseExistingKeySet = FALSE
ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
ProviderType = 12
RequestType = PKCS10
KeyUsage = 0xa0

[EnhancedKeyUsageExtension]
OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
;-----------------------------------------------

Then, run the command:

C:\> certreq -new csr.inf web01.req

And certreq will take the settings from the INF file that you created and turn them into a CSR with a .req extension.  The certreq reference and syntax, including all the various parameters that you can include in your INF file, is right here. It's at this moment that the private key associated with this request is generated and stored, but the private key is not stored within the CSR itself, so you don't have to worry about securely transporting the CSR.

Now you can submit that CSR to the certificate authority. Once the certificate authority has approved your request, they'll give you back a PEM or a CER file. If your CA gives you a PEM file, just rename it to CER.  The format is the same.  Remember that only the computer that generated the CSR has the private key for this certificate, so the request can only be completed on that computer.  To install the certificate, run:

C:\> certreq -Accept certificate.cer

Now you should see the certificate in the computer's certificate store, and the little key on the icon verifies that you do have the associated private key to go along with it.
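
You can double-check the private key part from PowerShell as well - HasPrivateKey should read True for the new certificate (the subject filter below is just an example):

PS C:\> Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*web01*" } | Format-List Subject, Thumbprint, HasPrivateKey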

So there you have it.  See you next time!

Powershell RemoteSigned and AllSigned Policies, Revisited

by Ryan 4. September 2013 12:35

A while back I talked about Powershell signing policies here.  Today I'm revisiting them.

Enabling script signing is generally a good idea for production environments. Things like script signing policies, using Windows Firewall, etc., all add up to provide a more thorough defense-in-depth strategy.

On the other hand, one negative side-effect that I've seen from enforcing Powershell signing policies is that it breaks the Best Practices Analyzers in Windows Server, since the BPAs apparently use scripts that are unsigned. Which is strange, since Microsoft is usually very good about signing everything that they release. I assume that they've since fixed that.

I'd consider the biggest obstacle to using only signed Powershell scripts to be one of discipline. But maybe that in itself is a good thing - if only the admins and engineers who are disciplined enough to put their digital signature on something are allowed to run PS scripts in production, perhaps that will cut down on the incidents of wild cowboys running ill-prepared scripts in your environment, or making a quick change on the fly to an important production script, etc. Could you imagine having to re-sign your script every time you changed a single line? That seems like the perfect way to encourage the behavior that scripts are first perfected in a lab, and only brought to production once they're fully baked and signed.

The next obstacle is getting everyone their own code signing certificate. This means you either need to spend some money getting a boat-load of certs from a public CA for all your employees, or you need to maintain your own PKI in your own Active Directory forest(s) for internal-use-only certificates.  This part alone is going to disqualify many organizations. Very rare is the organization that cares about properly signing things in their IT infrastructure.  Even rarer is the organization that actually does it, as opposed to just saying they want everything to be properly signed.  "It's just too much hassle... and now you're asking me to sign all my scripts, too?"
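
For what it's worth, once you do have a code signing certificate, the signing itself is painless. A sketch, assuming a single code signing cert sitting in your personal store:

PS C:\> $cert = @(Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert)[0]
PS C:\> Set-AuthenticodeSignature -FilePath .\MyScript.ps1 -Certificate $cert

Add a -TimestampServer URL to that last command if you want the signature to remain valid even after your certificate expires.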

I also want to reinforce this point: Like UAC, Powershell script execution policies are not meant to be relied upon as a strong security measure. Microsoft does not tout them as such. They're meant to prevent you from making mistakes and doing things on accident. Things like UAC and PS script execution policies will keep honest people honest and non-administrators from tearing stuff up.  An AllSigned execution policy can also thwart unsophisticated attempts to compromise your security by preventing things such as modifying your PS profile without your knowledge to execute malicious code the next time you launch Powershell. But execution policies are no silver bullet. They are simply one more thin layer in your overall security onion.

So now let's play Devil's advocate. We already know the RemoteSigned policy should be a breeze to bypass just by clearing the Zone.Identifier alternate NTFS data stream, which is a one-liner (Unblock-File and the -Stream parameter both require PowerShell 3.0+; on older machines, Sysinternals streams.exe -d does the same job):
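
PS C:\> Unblock-File .\script.ps1
PS C:\> Remove-Item .\script.ps1 -Stream Zone.Identifier

So how do we bypass an AllSigned policy?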

PS C:\> .\script.ps1
.\script.ps1 : File C:\script.ps1 cannot be loaded. The file C:\script.ps1 is not digitally signed. The script will not execute on the system.

Script won't execute?  Do your administrator's script execution policies get you down?  No problem:

  • Open Notepad.
  • Paste the following line into Notepad:
Powershell.exe -NoProfile -Command "Powershell.exe -NoProfile -EncodedCommand ([Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes((Get-Content %1 | %% {$_} | Out-String))))"
  • Save the file as bypass.bat.
  • Run your script by passing it as a parameter to bypass.bat.

PS C:\> .\bypass.bat .\script.ps1

... And congratulations, you just bypassed Powershell's execution policy as a non-elevated user.

So in conclusion, even after showing you how easy it is to bypass the AllSigned execution policy, I still recommend that good execution policies be enforced in production environments. Powershell execution policies are not meant to foil hackers.

  1. They're meant to be a safeguard against well-intentioned admins accidentally running scripts that cause unintended consequences.
  2. They verify the authenticity of a script. We know who wrote it and we know that it has not been altered since they signed it.
  3. They encourage good behavior by making it difficult for admins and engineers to lackadaisically run scripts that could damage a sensitive environment.

ShareDiscreetlyWebServer v1.0.1.2

by Ryan 13. April 2013 16:47

Several improvements over the last release in the past few days:

  • Some code optimizations. Pages are rendering about an order of magnitude faster now.
  • Just about all the HTML and Javascript has been exported to editable files so that an administrator can change up the code, color schemes, branding, etc., without needing to recompile the code.
  • The server can now send an S/MIME, digitally signed email to the person you want to send the URL to. Unfortunately some email clients (such as the Gmail web client) don't natively understand S/MIME, but Outlook handles it just fine. You can also get various plugins and programs to read S/MIME emails if your email client doesn't understand it. I'd rather lean toward more security-related bells and whistles than max compatibility for this project.

You can access the secret server at https://myotherpcisacloud.com.

ShareDiscreetlyWebServer v1.0.0.3

by Ryan 9. April 2013 13:17

I wrote a web service.  I call it "ShareDiscreetly".  Creative name, huh?

I wrote the server in C# .NET 4.0.  It runs as a Windows service.

ShareDiscreetlyWebServer serves a single purpose: to allow two people to share little bits of information - secrets - such as passwords, etc., in a secure, discreet manner.  The secrets are protected both in transit and at rest, using the FIPS-approved AES-256 algorithm with asymmetric keys supplied by an X.509 certificate.

Oh, and I made sure that it's thoroughly compatible with Powershell so that the server can be used in a scriptable/automatable way.

You can read a more thorough description of the server as you try it out here.

Please let me know if you find any bugs, exploits, or if you have any feature requests!

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST


MCITP: Enterprise Administrator

VCP5-DCV

Profile for Ryan Ries at Server Fault, Q&A for system administrators

LOPSA

GitHub: github.com/ryanries

 

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.