UNC Hardening

A couple of months ago, Microsoft published two Windows patches to address vulnerabilities in the way that Windows machines access UNC paths over the network.

MS15-011

MS15-014

Guidance on Deployment of MS15-011 and MS15-014 by AskPFE Platforms

This is essentially another man-in-the-middle-style SMB hijack, and these types of attacks have been well known for a long time, second in notoriety perhaps only to pass-the-hash.  One of the countermeasures that we admins have had for years to help combat these sorts of SMB proxy attacks is SMB signing:

Of course I'd recommend enabling this everywhere - on both domain controllers and domain members - but that's no longer quite enough.  Security researchers found a way of bypassing or disabling SMB signing, which is what prompted Microsoft to release the two security patches I mentioned above.  One of those hotfixes comes with a new Group Policy setting called Hardened UNC Paths.

You can find this new setting in Computer Configuration > Policies > Administrative Templates > Network > Network Provider:

So keep in mind that applying the patch alone doesn't give you any of the benefits of Hardened UNC Paths.  There is additional GPO configuration you must do to enable it.

In the GPO, an admin would specify the types of UNCs that he or she wanted to harden, so that when a client connects to a UNC that matches a certain pattern, that client applies additional security policies to that connection.

Wildcards are supported, but you must supply either a server name or share name, so no, you cannot do \\* or \\*\*.

To protect the two most important UNC paths in an Active Directory domain, you'd configure the GPO like this:

\\*\NETLOGON  RequireMutualAuthentication=1, RequireIntegrity=1
\\*\SYSVOL    RequireMutualAuthentication=1, RequireIntegrity=1
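
If you want to stage this in a lab before touching Group Policy, the setting ultimately just writes values to a registry key.  Here's a rough PowerShell equivalent, with the key path taken from Microsoft's MS15-011 deployment guidance (use the actual GPO for production):

# Sketch: what the "Hardened UNC Paths" GPO writes to the registry.
$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths'
New-Item -Path $Key -Force | Out-Null
Set-ItemProperty -Path $Key -Name '\\*\NETLOGON' -Value 'RequireMutualAuthentication=1, RequireIntegrity=1'
Set-ItemProperty -Path $Key -Name '\\*\SYSVOL'   -Value 'RequireMutualAuthentication=1, RequireIntegrity=1'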

This additional layer of security costs very little, relative to the benefit of ensuring all your Windows clients will only connect to genuine, mutually authenticated domain controllers to get their Group Policies and logon scripts.  Especially if you have mobile clients on the go that connect from coffee shops and hotels!

Testing Authenticated NTP Configurations

It's been a long time since I posted, I know. I've been busy both with actual work, and also working on a personal project that involves getting way better at old-school C programming. I've been following Casey Muratori and his Handmade Hero series for a couple of months now, and I've been really inspired to write more C.

More on what I've been working on as it develops.

Also, my friend and co-conspirator Wesley sent me a gift for accomplishing the "Serverfault 10k Challenge" in 2014:

This majestic creature embodies a never-ending thirst for knowledge and symbolizes the triumphs and tribulations of the sysadmin.  It's not a Microsoft MVP award...

... it's better.

Alright, so the topic of today's post: Authenticated NTP.  NTP is one of the very oldest protocols on the internet, has had very few vulnerabilities reported over its 30+ year lifespan, and is ubiquitous in virtually every computer network on the planet. (Because most computers are awful at keeping time.) There are many different versions of it and spinoffs from the reference implementation. People tend to find NTP boring, but it's one of my favorite internet protocols.  Most people just point their router at pool.ntp.org and never think about NTP again.

Until the day comes that you want to enable authentication in your NTP architecture. Authentication is the mechanism that allows a message recipient to verify that the response came from the intended sender and wasn't tampered with in transit. Within a Windows Active Directory domain, we already have this. Domain members use the Kerberos session keys they already have to create authenticated Windows Time messages with domain controllers.  But that means if you're not a member of the domain, you can't participate.
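
If you want to verify that a domain member really is using that mechanism, w32tm will tell you; a Type of NT5DS in the configuration output means the machine is syncing from the domain hierarchy rather than from a manually specified NTP peer:

w32tm /query /configuration
w32tm /query /status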

The administrator of an NTP server who wishes to send you authenticated NTP messages will probably send you an ntp.keys file, or at least a password for you to use. An example ntp.keys file looks like this:

# ntpkey_MD5key_ntp.domain.com.2825130701
# Thu Jan 15 20:00:01 2015
 1 MD5  JzF&f})0ocK1{H9	# MD5 key
 2 MD5  Dv(0v@W8vJ8%#*2	# MD5 key
 3 MD5  N(BzeyvYx$qzs5]	# MD5 key
 4 MD5  TVd2*DXtu-mewLs	# MD5 key
 5 MD5  F9UTa)8AQ9O9561	# MD5 key
 6 MD5  F9}{%$d9vs3Dpxb	# MD5 key
 7 MD5  D]Z*OOr56ukpiD6	# MD5 key
 8 MD5  TTr$OIR9+f74J28	# MD5 key
 9 MD5  EC3F9Zr%-3190&0	# MD5 key
10 MD5  Ndi5+]F^3x3Gdeb	# MD5 key
11 MD5  S+27&8(ba30qM@5	# MD5 key
12 MD5  CnO8)=CyG)QBj]}	# MD5 key
13 MD5  Em62oK!RXhw#y9_	# MD5 key
14 MD5  K-l(^UE@&T(Zj5B	# MD5 key
15 MD5  Gcff1nJb(CuF$*!	# MD5 key
16 MD5  W-*5^xbp3@v8br)	# MD5 key

There aren't any tools that ship with Windows that will help you test this stuff.  The Windows implementation of NTP ("Windows Time") is good enough to keep Windows working within the context of an AD domain, but it's not a full-featured reference NTP implementation.  So, I downloaded the Windows port of NTP from ntp.org. You can use the ntpdate program to query an NTP server using the shared secrets in your ntp.keys file to verify that authentication is successful:

C:\Users\Ryan>ntpdate.exe -b -d -k C:\Users\Ryan\ntp.keys -a 1 ntp.domain.com
...
transmit(70.144.88.104)
receive(70.144.88.104)
receive: authentication passed
transmit(70.144.88.104)
receive(70.144.88.104)
receive: authentication passed
...
server 70.144.88.104, port 123
stratum 2, precision -20, leap 00, trust 000
refid [70.144.88.104], delay 0.03090, dispersion 0.00066
transmitted 4, in filter 4
...
Authenticated NTP: Check. I'll probably write more about this topic in the future, but I have to perform some experiments first.

Verifying RPC Network Connectivity Like A Boss

Aloha.  I did some fun Powershelling yesterday and now it's time to share.

If you work in an IT environment of any significant size, chances are you have firewalls.  Maybe lots and lots of firewalls. RPC can be a particularly difficult network protocol to work with when it comes to making sure all the ports necessary for its operation are open on your firewalls. I've found that firewall guys sometimes have a hard time allowing the application guys' RPC traffic through their firewalls because of its dynamic nature. Sometimes the application guys don't really know how RPC works, so they don't really know what to ask of the firewall guys.  And to make it even worse, RPC errors can be hard to diagnose.  For instance, the classic RPC error 1722 (0x6BA) - "The RPC server is unavailable" - sounds like a network problem at first, but can actually mean access denied, DNS resolution failure, etc.

MSRPC, or Microsoft Remote Procedure Call, is Microsoft's implementation of DCE (Distributed Computing Environment) RPC. It's been around a long time and is pervasive in an environment containing Windows computers. Tons of Windows applications and components depend on it.

A very brief summary of how the protocol works: There is an "endpoint mapper" that runs on TCP port 135. You can bind to that port on a remote computer anonymously and enumerate all the various RPC services available on that computer.  The services may be using named pipes or TCP/IP.  Named pipes will use port 445.  The services that are using TCP are each dynamically allocated their own TCP ports, which are drawn from a pool of port numbers. This pool of port numbers is by default 1024-5000 on XP/2003 and below, and 49152-65535 on Vista/2008 and above. (The ephemeral port range.) You can customize the port range that RPC uses if you wish, like so:

reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /f /d 8000-9000
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /f /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /f /d Y

And/Or

netsh int ipv4 set dynamicport tcp start=8000 num=1001
netsh int ipv4 set dynamicport udp start=8000 num=1001
netsh int ipv6 set dynamicport tcp start=8000 num=1001
netsh int ipv6 set dynamicport udp start=8000 num=1001
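
And before you go arguing with the firewall team, you can double-check which dynamic port range a machine is actually using with the corresponding show commands:

netsh int ipv4 show dynamicport tcp
netsh int ipv4 show dynamicport udp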

This is why we have to query the endpoint mapper first, because we can't just guess exactly which port we need to connect to for a particular service.

So, I wrote a little something in Powershell that will test the network connectivity of a remote machine for RPC by querying the endpoint mapper and then testing each port that the endpoint mapper reports it's currently using.


#Requires -Version 3
Function Test-RPC
{
    [CmdletBinding(SupportsShouldProcess=$True)]
    Param([Parameter(ValueFromPipeline=$True)][String[]]$ComputerName = 'localhost')
    BEGIN
    {
        Set-StrictMode -Version Latest
        $PInvokeCode = @'
        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public class Rpc
        {
            // I found this crud in RpcDce.h

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingFromStringBinding(string StringBinding, out IntPtr Binding);

            [DllImport("Rpcrt4.dll")]
            public static extern int RpcBindingFree(ref IntPtr Binding);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqBegin(IntPtr EpBinding,
                                                    int InquiryType, // 0x00000000 = RPC_C_EP_ALL_ELTS
                                                    int IfId,
                                                    int VersOption,
                                                    string ObjectUuid,
                                                    out IntPtr InquiryContext);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqNext(IntPtr InquiryContext,
                                                    out RPC_IF_ID IfId,
                                                    out IntPtr Binding,
                                                    out Guid ObjectUuid,
                                                    out IntPtr Annotation);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingToStringBinding(IntPtr Binding, out IntPtr StringBinding);

            public struct RPC_IF_ID
            {
                public Guid Uuid;
                public ushort VersMajor;
                public ushort VersMinor;
            }

            public static List<int> QueryEPM(string host)
            {
                List<int> ports = new List<int>();
                int retCode = 0; // RPC_S_OK                
                IntPtr bindingHandle = IntPtr.Zero;
                IntPtr inquiryContext = IntPtr.Zero;                
                IntPtr elementBindingHandle = IntPtr.Zero;
                RPC_IF_ID elementIfId;
                Guid elementUuid;
                IntPtr elementAnnotation;

                try
                {                    
                    retCode = RpcBindingFromStringBinding("ncacn_ip_tcp:" + host, out bindingHandle);
                    if (retCode != 0)
                        throw new Exception("RpcBindingFromStringBinding: " + retCode);

                    retCode = RpcMgmtEpEltInqBegin(bindingHandle, 0, 0, 0, string.Empty, out inquiryContext);
                    if (retCode != 0)
                        throw new Exception("RpcMgmtEpEltInqBegin: " + retCode);
                    
                    do
                    {
                        IntPtr bindString = IntPtr.Zero;
                        retCode = RpcMgmtEpEltInqNext(inquiryContext, out elementIfId, out elementBindingHandle, out elementUuid, out elementAnnotation);
                        if (retCode == 1772) // RPC_X_NO_MORE_ENTRIES - end of the list
                            break;
                        if (retCode != 0)
                            throw new Exception("RpcMgmtEpEltInqNext: " + retCode);

                        retCode = RpcBindingToStringBinding(elementBindingHandle, out bindString);
                        if (retCode != 0)
                            throw new Exception("RpcBindingToStringBinding: " + retCode);
                            
                        string s = Marshal.PtrToStringAuto(bindString).Trim().ToLower();
                        if(s.StartsWith("ncacn_ip_tcp:"))                        
                            ports.Add(int.Parse(s.Split('[')[1].Split(']')[0]));
                        
                        RpcBindingFree(ref elementBindingHandle);
                        
                    }
                    while (retCode != 1772); // RPC_X_NO_MORE_ENTRIES

                }
                catch(Exception ex)
                {
                    Console.WriteLine(ex);
                    return ports;
                }
                finally
                {
                    RpcBindingFree(ref bindingHandle);
                }
                
                return ports;
            }
        }
'@
    }
    PROCESS
    {
        ForEach($Computer In $ComputerName)
        {
            If($PSCmdlet.ShouldProcess($Computer))
            {
                [Bool]$EPMOpen = $False
                $Socket = New-Object Net.Sockets.TcpClient
                
                Try
                {                    
                    $Socket.Connect($Computer, 135)
                    If ($Socket.Connected)
                    {
                        $EPMOpen = $True
                    }
                    $Socket.Close()                    
                }
                Catch
                {
                    $Socket.Dispose()
                }
                
                If ($EPMOpen)
                {
                    Add-Type $PInvokeCode
                    $RPCPorts = [Rpc]::QueryEPM($Computer)
                    [Bool]$AllPortsOpen = $True
                    Foreach ($Port In $RPCPorts)
                    {
                        $Socket = New-Object Net.Sockets.TcpClient
                        Try
                        {
                            $Socket.Connect($Computer, $Port)
                            If (!$Socket.Connected)
                            {
                                $AllPortsOpen = $False
                            }
                            $Socket.Close()
                        }
                        Catch
                        {
                            $AllPortsOpen = $False
                            $Socket.Dispose()
                        }
                    }

                    [PSCustomObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen; 'RPCPortsInUse' = $RPCPorts; 'AllRPCPortsOpen' = $AllPortsOpen}
                }
                Else
                {
                    [PSCustomObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen}
                }
            }
        }
    }
    END
    {

    }
}
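
Usage is what you'd expect from an advanced function; it accepts computer names from the parameter or the pipeline, and supports -WhatIf thanks to SupportsShouldProcess.  The names below are just placeholders:

# One computer, or several via the pipeline:
Test-RPC -ComputerName 'server01'
'server01', 'server02', 'dc01' | Test-RPC | Format-Table -AutoSize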

The output is one object per computer, with ComputerName, EndPointMapperOpen, RPCPortsInUse, and AllRPCPortsOpen properties.


You can also query the endpoint mapper with PortQry.exe -n server01 -e 135, but I was curious about how it worked at a deeper level, so I ended up writing something myself. There weren't many examples of how to use that particular native API, so it was pretty tough.

More Windows and AD Cryptography Mumbo-Jumbo

I've still got my head pretty deep in cryptography and hashing as far as Windows and Active Directory are concerned, and I figured it was worth putting here in case you're interested.  We're going to talk about things like NTLM and how Windows stores hashes, and more.

The term NTLM is a loaded one, as the acronym is often used to refer to several different things.  It refers not only to Microsoft's scheme for storing password hashes (the NT hash, built on a standard hashing algorithm), but also to a family of network protocols.  The NTLM used for storing password hashes on disk (aka the NT hash) is a totally different thing from the NTLM used to transmit authentication data across a TCP/IP network.  There's the original LAN Manager protocol, which is worse than NT LAN Manager (NTLM or NTLMv1), which is worse than NTLMv2, which is worse than NTLMv2 with Session Security, and so on...  but an NT hash is an NT hash is an NT hash.  When we refer to either NTLMv1 or NTLMv2 specifically, we're not talking about how the password gets stored on disk; we're talking about network protocols.

Also, this information refers to Vista/2008+ era stuff.  I don't want to delve into the ancient history of Windows NT 4 or 2000, so let's not even discuss LAN Manager/LM.  LM hashes are never stored or transmitted, ever, in an environment that consists only of Vista/2008+ machines.  They're extinct.

Unless some bonehead admin purposely turned it back on.  In which case, fire him/her.

Moving on…

LOCAL MACHINE STUFF

So here we talk about what goes on in a regular Windows machine with no network connection.  No domain.  All local stuff. No network.

"SO REALLY, WTF IS AN NT HASH!?"

You might ask.  An NT hash is simply the MD4 hash of the little endian UTF-16 encoded plaintext input.  So really it’s MD4.

"So the Windows Security Accounts Manager (SAM) stores MD4 hashes of passwords to disk, then?"

Well no, not directly.  The NT hashes, before being stored, are encrypted with RC4 using the machine's "boot key," which is both hard to get at and unique to each OS install.  By "hard to get at," I mean that the boot key is scattered across several areas of the registry that require system-level access, and you have to know how to read data out of the registry that cannot be seen in Regedit, even if you run it as Local System.  And it must be de-obfuscated on top of that.  The boot key and some other bits of data are hashed with MD5, and then another RC4 key is created from that MD5 hash plus some more information, such as the specific user's security identifier.

So the final answer is that Windows stores local SAM passwords to disk in the registry as RC4-encrypted MD4 hashes using a key that is unique to every machine and is difficult to extract and descramble, unless you happen to be using one of the dozen or so tools that people have written to automate the process.

Active Directory is a different matter altogether.  Active Directory does not store domain user passwords in a local SAM database the same way that a standalone Windows machine stores local user passwords.  Active Directory stores those password hashes in a file on disk named NTDS.dit.  The only password hash that should be stored in the local SAM of a domain controller is the Directory Services Restore Mode password.  The algorithms used to save passwords in NTDS.dit are much different from the algorithms used by standalone Windows machines to store local passwords.  Before I tell you what those algorithms are, I want to mention that the algorithms AD uses to store domain password hashes on disk should not be in scope for auditors, because NTDS.dit is not accessible by unauthorized users.  The operating system maintains an exclusive lock on it, and you cannot access it as long as the operating system is running.  Because of that, the online Directory Services Engine and NTDS.dit together should be treated as one self-contained "cryptographic module" and as such fall under the FIPS 140-2 clause:

"Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form.  Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…"

So even plaintext secrets are acceptable to FIPS 140, as long as they stay within the cryptographic module and cannot be accessed by or sent to outsiders.

Active Directory stores not only the hashed passwords of domain users, but also their password history.  This is useful for that "Remember the last 24 passwords" Group Policy setting.  So there are encrypted NT hashes stored in NTDS.dit.  Let's just assume we have an offline NTDS.dit – and again, this should not be of any concern to auditors, because what follows is Microsoft proprietary information that was obtained through reverse engineering, and it's only internal to AD.  FIPS should not be concerned with it because it all takes place "within the cryptographic module."  Access to offline copies of NTDS.dit should be governed by how you protect your backups.

To decrypt a hash in NTDS.dit, first you need to decrypt the Password Encryption Key (PEK) which is itself encrypted and stored in NTDS.dit.  The PEK is the same across all domain controllers, but it is encrypted using the boot key (yes the same one discussed earlier) which is unique on every domain controller.  So once you have recovered the bootkey of a domain controller (which probably means you have already completely owned that domain controller and thus the entire domain so I'm not sure why you'd even be doing this) you can decrypt the PEK contained inside of an offline copy of NTDS.dit that came from that same domain controller.  To do that, you hash the bootkey 1000 times with MD5 and then use that result as the key to the RC4 cipher.  The only point to a thousand rounds of hashing is to make a brute force attack more time consuming.

OK, so now you've decrypted the PEK.  Next, use that decrypted PEK, plus 16 bytes of the encrypted hash itself, as key material for another round of RC4.  Finally, use the RID (the last component of the SID) of the user whose hash you are trying to decrypt to derive the keys for a final round of DES and uncover, at last, the NT (MD4) hash for that user.

Now you need to brute-force attack that hash.  Using the program ighashgpu.exe, which uses CUDA to enable all 1344 processing cores on my GeForce GTX 670 graphics card to make brute force attempts on one hash in parallel, I can perform over 4 billion attempts per second to eventually arrive at the original plaintext password of the user.  It doesn’t take long to crack an NT hash any more.

As a side-note, so-called "cached credentials" are actually nothing more than password verifiers.  They’re essentially a hash of a hash, and there is no reversible information contained in a "cached credential" or any information that is of any interest to an attacker.  "Cached credentials" pose no security concern, yet most security firms, out of ignorance, still insist that they be disabled.

So there you have it.  You might notice that nowhere in Windows local password storage or Active Directory password storage was the acronym SHA ever used.  There is no SHA usage anywhere in the above processes, at all. 

 

NETWORK STUFF

Now passing authentication material across the network is an entirely different situation!

I’ll start with the bad news.  Remember earlier when I talked about brute-forcing the decrypted NT hash of another user?  Well that last step is often not even necessary.  NT hashes are password-equivalent, meaning that if I give Windows your hash, it’s as good as giving Windows your password in certain scenarios.  I don’t even need to know what your actual password is.  This is the pass-the-hash attack that you might have heard of.  But it’s not as bad as it sounds.

The good news is that neither Windows nor Active Directory ever sends your bare NT hash over the wire during network transmission.  And you cannot begin a pass-the-hash attack until you’ve already taken over administrative control of some domain joined machine.  That means there is no such thing as using pass-the-hash to own an entire networked AD environment just from an anonymous observer sniffing network packets.  That’s not how it works.  (Fun video on PtH)

Now that we know what an NT hash is, it’s a good time to draw the distinction that whenever we talk about specifically NTLMv1 and NTLMv2, we’re not actually talking about NT hashes anymore.  We’re talking about network communication protocols.  The whole mess is often just called NTLM as a blanket term because it’s all implemented by Microsoft products and it’s all interrelated.

Both NTLMv1 and NTLMv2 are challenge-response protocols, in which the server sends the client a challenge and the client sends back a response that proves it knows the password, without ever actually sending the password or its hash directly over the network.  This works because the server already knows the client's password (hash), either because it's stored in the local SAM in the case of local Windows accounts, or because the server can forward the client's response to a domain controller, and the domain controller can verify it and respond with "Yep, that's his password, alright."

With NTLMv1, you’ll see some usage of DES during network communication. 

With NTLMv2, you’ll see some usage of HMAC-MD5.

There’s also NTLM2 Session, aka NTLMv2 With Session Security, but it uses the same encryption and hashing algorithms as NTLMv2.
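
To make the HMAC-MD5 part a bit more concrete, here is a rough sketch of the NTLMv2 proof computation as described in MS-NLMP.  It assumes you already have the user's 16-byte NT hash, the 8-byte server challenge, and the client "blob" (timestamp, client nonce, etc.) as byte arrays; it's illustrative only, not a working NTLM client:

# Rough NTLMv2 sketch (per MS-NLMP). $NtHash, $ServerChallenge and $Blob are assumed
# to already exist as byte arrays, and $User/$Domain as strings.
$Unicode = [System.Text.Encoding]::Unicode

# NTOWFv2: HMAC-MD5 keyed with the NT hash, over UPPERCASE(user) + domain
$Hmac = New-Object System.Security.Cryptography.HMACMD5 (,$NtHash)
$ResponseKeyNT = $Hmac.ComputeHash($Unicode.GetBytes($User.ToUpper() + $Domain))

# NTProofStr: HMAC-MD5 keyed with NTOWFv2, over the server challenge + blob.
# This, plus the blob, is what goes over the wire - never the NT hash itself.
$Hmac = New-Object System.Security.Cryptography.HMACMD5 (,$ResponseKeyNT)
$NtProofStr = $Hmac.ComputeHash([byte[]]($ServerChallenge + $Blob))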

It is possible to completely remove the usage of NTLM network protocols from an Active Directory domain and go pure Kerberos, but it will break many applications.  Here is a fantastic article written by one of my favorite Microsoft employees about doing just that.

So let’s assume that hypothetically we blocked all usage of NTLM network protocols and went pure Kerberos. Kerberos in AD supports only the following encryption:

DES_CBC_CRC    (Disabled by default as of Win7/2008R2) [Source]
DES_CBC_MD5    (Disabled by default as of Win7/2008R2)
RC4_HMAC_MD5
AES256-CTS-HMAC-SHA1-96
AES128-CTS-HMAC-SHA1-96
Future encryption types
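
If you want to see what a particular machine is allowed to use, the "Network security: Configure encryption types allowed for Kerberos" policy just writes a bit mask of the above types to the registry (and klist will show you the encryption type of each ticket already in your cache).  A quick check; if the value is absent, the OS defaults apply:

Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\Kerberos\Parameters' -Name SupportedEncryptionTypes -ErrorAction SilentlyContinue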

 

Of course, there are plenty of other Windows applications that pass authentication traffic over the network besides just AD.  Remote Desktop is a great example.  Remote Desktop traditionally uses RC4, but modern versions of Remote Desktop will negotiate a Transport Layer Security (TLS) connection wherever possible, which is also what Network Level Authentication (NLA) is built on.  This is great news because the TLS connection uses the computer's digital certificate, that certificate can be automatically created and assigned to the computer by an Enterprise Certificate Authority, and it can be capable of SHA256, SHA384, etc., however the Certificate Authority administrator defines it.

If you turn on FIPS mode, Remote Desktop can only use TLS 1.0 (as opposed to SSL) when NLA is negotiated, and it can only use 3DES_CBC instead of RC4 when TLS is not negotiated.

Other ciphers that are turned off when FIPS mode is turned on include:

- TLS_RSA_WITH_RC4_128_SHA
- TLS_RSA_WITH_RC4_128_MD5
- SSL_CK_RC4_128_WITH_MD5
- SSL_CK_DES_192_EDE3_CBC_WITH_MD5
- TLS_RSA_WITH_NULL_MD5
- TLS_RSA_WITH_NULL_SHA

That will apply to all applications running on Windows that rely upon Schannel.dll.  The application will crash if it calls upon one of the above ciphers when FIPS mode is enabled.
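
Whether FIPS mode is on is itself just a policy setting ("System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing"), which lands in the registry, so it's easy to check:

# 1 = FIPS mode enabled; 0 or missing = disabled.
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled -ErrorAction SilentlyContinue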

So anyway, that's about all I've got right now.  If you made it to the bottom of this post I should probably buy you a drink!

 

FIPS 140


Oh yeah, I have a blog! I almost forgot.  I've been busy working.  Let's talk about an extraordinarily fascinating topic: Federal compliance!

FIPS (Federal Information Processing Standards) is a family of standards that holds sway mainly in the U.S. and Canada.  Within each standard, there are multiple revisions and multiple levels of classification.  FIPS 140 is about encryption and hashing algorithms; it's about accrediting cryptographic modules.  Here's an example of a certificate.  FIPS 140-2 is the current revision, and FIPS 140-3 is under development with no announced release date yet.  It does not matter if your homebrew cryptography is technically "better" than anything else out there: if your cryptographic module has not gone through the code submission and certification process, then it is not FIPS-approved.  You have to submit your source code/device/module to the government in order to gain FIPS approval, and even the most amazing cryptography the world has ever seen is not FIPS approved or compliant until it goes through that process.  In fact, the government is free to certify weaker algorithms over stronger ones simply because the weaker algorithms have undergone the certification process and the stronger ones have not, and it has historically done so.  (Triple-DES being the prime example.)

There is even a welcome kit, with stickers.  You need to put these tamper-proof stickers on your stuff for certain levels of FIPS compliance.

So if you are ever writing any software of your own, please do not try to roll your own cryptography. Use the approved libraries that have already gone through certification. Your custom crypto has about a 100% chance of being worse than AES/SHA (NSA backdoors notwithstanding), and it will never be certifiable for use in a secure Federal environment anyway.  Also avoid things like re-hashing your hash with another hashing algorithm in an attempt to be 'clever' – doing so can ironically make your hash weaker.

And the Feds are picky.  For instance, if programming for Windows in .NET, the use of System.Security.Cryptography.SHA1 classes may be acceptable while the use of System.Security.Cryptography.SHA1Managed classes is not.  It doesn't mean the methods in the SHA1Managed classes are any worse; it simply means Microsoft has not submitted them for approval.
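
You can see that distinction for yourself on a machine with FIPS mode turned on: the CSP-backed class (the FIPS-validated implementation) constructs fine, while the managed one throws, even though they compute exactly the same hash.

# On a FIPS-enabled machine:
New-Object System.Security.Cryptography.SHA1CryptoServiceProvider   # works - validated implementation
New-Object System.Security.Cryptography.SHA1Managed                 # throws InvalidOperationException - not validated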

Many major vendors such as Microsoft and Cisco go through this process for every new version of a product that they release.  It costs money and time to get your product FIPS-certified.  Maybe it's a Cisco ASA appliance, or maybe it's a simple Windows DLL.

The most recent publication of FIPS 140-2 Annex A lists approved security functions (algorithms.)  It lists AES and SHA-1 as acceptable, among others. So if your application uses only approved implementations of AES and SHA-1 algorithms, then that application should be acceptable according to FIPS 140-2.  If your application uses an MD5 hashing algorithm during communication, that product is NOT acceptable for use in an environment where FIPS compliance must be maintained. 

However, there is also this contradictory quote from NIST:

"The U.S. National Institute of Standards and Technology says, 'Federal agencies should stop using SHA-1 for ... applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010.'"

So it seems to me that there are contradictory government statements regarding the usage of security functions.  The most recent publication of FIPS 140-2 Annex A clearly lists SHA-1 as an acceptable hashing algorithm, yet the quote from NIST says that government agencies must use only SHA-2 after 2010.  I'm not sure what the answer is to that.

These algorithms can be broken up into two categories: encryption algorithms and hashing algorithms.  An example of a FIPS encryption algorithm is AES (which consists of three members of the Rijndael family of ciphers, adopted in 2001, and has a much cooler name).  Encrypted data can be reversed/decrypted, that is, converted back into its original form from before it was encrypted.

Hashing algorithms on the other hand, are also known as one-way functions.  They are mathematically one-way and cannot be reversed.  Once you hash something, you cannot “un-hash” it, no matter how much computing power you have.  Hashing algorithms take any amount of data, of an arbitrary size, and mathematically map it to a “hash” of fixed length.  For instance, the SHA-256 algorithm will map any chunk of data, whether it be 10 bytes or 2 gigabytes, into a 256 bit hash.  Always 256 bit output, no matter the size of the input.
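
You can see the fixed-size property for yourself in a couple of lines of PowerShell; the input sizes below are arbitrary:

# Hash a tiny string and a ~1 MB string: the digest is 32 bytes (256 bits) either way.
$Sha256 = [System.Security.Cryptography.SHA256]::Create()
foreach ($Text in 'tiny', ('x' * 1MB)) {
    $Digest = $Sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Text))
    '{0,8} bytes in -> {1} bytes out' -f $Text.Length, $Digest.Length
}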

This is why the hash of a password is generally considered decently secure: there is NO way to reverse the hash, so you can pass that hash to someone else via insecure means (e.g. over a network connection), and if the other person already knows what your password should be, then the hash you gave them proves that you know the actual password.  That's a bit of a simplification, but it gets the point across.

If you were trying to attack a hash, all you can do, if you know what hash algorithm was used, is to keep feeding that same hash algorithm new inputs, maybe millions or billions of new inputs a second, and hope that maybe you can reproduce the same hash.  If you can reproduce the same hash, then you know your input was the same as the original ‘plaintext’ that you were trying to figure out.  Maybe it was somebody’s password.  This is the essence of a ‘brute-force’ attack against a password hash.
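
As a toy illustration of the shape of that loop (using SHA-256 here rather than an NT hash, and a hypothetical $TargetHash and $Wordlist):

# Hash each candidate and compare against the target digest.
# $TargetHash (hex string) and $Wordlist (array of candidate strings) are hypothetical;
# real cracking tools do this on GPUs at billions of attempts per second.
$Sha256 = [System.Security.Cryptography.SHA256]::Create()
$Found = foreach ($Candidate in $Wordlist) {
    $Digest = $Sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Candidate))
    if (([System.BitConverter]::ToString($Digest) -replace '-') -eq $TargetHash) { $Candidate; break }
}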

Logically, if all inputs, regardless of size, are mapped to a fixed size, then it stands to reason that there must be multiple sets of data that, when hashed, result in the same hash.  These are known as hash collisions.  They are very rare, but they are very bad, and collisions are the reason we needed to migrate away from the MD5 hashing algorithm, and why we will eventually need to migrate away from the SHA-1 hashing algorithm.  (No collisions have been found in SHA-1 yet that I know of.)  Imagine if I could create a fake SSL certificate that, when I creatively flipped a few bits here and there, resulted in the same hash as a popular globally trusted certificate!  That would be very bad.

Also worth noting is that SHA-2 is an umbrella term that includes SHA256, SHA384, SHA512, etc.

FIPS 140 is only concerned with algorithms used for external communication: any communication outside of the application or module, whether that's network communication, communication with another application on the same system, and so on.  FIPS 140 is not concerned with algorithms used to handle data within the application itself, in its own private memory, that never leaves the application and cannot be accessed by unauthorized users.  Here is an excerpt from the 140-2 standard to back up my claim:

“Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form. Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…”

Let's use Active Directory as an example.  This is why, when someone gets concerned about what algorithms AD uses internally, you should refer them to the above paragraph and tell them not to worry about it.  Even if it were plaintext (it's not, but even if it hypothetically were), it isn't in scope for FIPS because it is internal only to the application.  When Active Directory and its domain members are operated in FIPS mode, connections made via Schannel.dll, Remote Desktop, etc., will only use FIPS-compliant algorithms. If you had applications that make calls to non-FIPS crypto libraries, those applications will now crash.

Another loophole that has appeared to satisfy FIPS requirements in the past is wrapping a weaker algorithm inside of a stronger one.  For instance, a classic implementation of the RADIUS protocol utilizes the MD5 hashing algorithm during network communications.  MD5 is a big no-no.  However, see this excerpt from Cisco:

“RADIUS keywrap support is an extension of the RADIUS protocol. It provides a FIPS-certifiable means for the Cisco Access Control Server (ACS) to authenticate RADIUS messages and distribute session keys.”

So by simply wrapping weaker RADIUS keys inside of AES, it becomes FIPS-certifiable once again.  It would seem to follow that this logic also applies when using TLS and IPsec, as they are able to use very strong algorithms (such as SHA-2) that most applications do not natively support.

So with all that said, if you need the highest levels of network security, you need 802.1x and IPsec to protect all those applications that can't protect themselves.