An Ode to Server Fault

by Ryan 13. December 2013 08:20

Server Fault

Today marks a momentous occasion, as I have finally attained 20,000 reputation on Server Fault!  20k is by no means the reputation limit, and there are still plenty of other badges to be earned as well, but it is the last reputation-based milestone in my journey.  It comes with the title of "Trusted User" and grants an extra layer of powers on the site just shy of full moderator power.

It took me almost two years and 490 answers to achieve it. 

In case you don't know, Server Fault is a question and answer site that I have referenced many times before on this blog. It is part of a larger network of question and answer sites known collectively as Stack Exchange.  Server Fault is specifically aimed at IT professionals: people who work with servers and networks in an administrative, engineering or architectural capacity to support a business's IT operations.  It is not about programming, nor is it about the enthusiast user at home setting up a Linksys router... though the lines can sometimes be blurry.  People come and ask questions on the site, such as "Halp I broke Active Directory" or "How do I SharePoint?", and we gain reputation by providing answers to those questions, which the community votes on based on their quality. (Or we lose reputation if our answers suck!)

20k reputation is actually just a drop in the bucket on some other Stack Exchange sites such as Stack Overflow, but the difference is that Server Fault only gets a fraction of the traffic that Stack Overflow gets.  I've chosen to focus on SF as it's most closely aligned with my own professional ambitions and interests.

Q: Why did I choose Server Fault over the TechNet forums or Experts Exchange?

It's been long enough that I barely remember first stumbling upon the site, but I know it was while researching some problem with WSUS or DHCP or Active Directory or something like that.  The site's aesthetic design was very attractive, the layout made sense to me, and it was clean and neat.  The questions covered a wide range of interesting things that were right up my alley.  I liked the idea of being "rewarded" for giving people good answers and rewarding others for their insight. Even if the reputation is totally intangible and practically meaningless, it still gives me a sense of progression and of having earned something.  In a way, it makes a game out of answering people's questions.  I know that the TechNet forums do reputation too, but the website doesn't look and feel as nice or have as many features, the questions aren't usually as varied and interesting, and the community (both the askers and the answerers) generally seems to be of a lower caliber.  Server Fault is chock full of features, including a sweet chat room where you can go and shoot the bull with other sysadmin-type people.

I quickly signed up, and before I knew it I was visiting the site every day to see what types of technology people were discussing and if there were any questions there that I could answer.  And after I found out the site ran on a mainly Microsoft stack (IIS, .NET, MS SQL, etc.,) I was totally in love.

My very first answer on SF. The question was from a Windows Server admin, asking what scripting language he should learn.

One of the quirks about Server Fault that I wouldn't see on the TechNet forums is that there are a lot of questions about Unix and Unix/Linux applications too.  That's a challenge for me because, in case you haven't noticed, I'm a Microsoft evangelist.  But that doesn't mean I'm a Linux hater.  I know that it's a very solid platform used by millions of people around the world and I want to learn about it too.  Even though I tend to opt for using Microsoft platforms and tools, I also get to see other people bringing *nix and Microsoft tech together in fascinating ways, such as this guy, who is setting up 1400 Samba4-based Active Directory Read Only Domain Controllers!

Q: Why would I waste my time answering other people's questions on the internet?

Ah.  This is where it gets interesting, you see, because it's not a waste of time.  In fact, spending time on Server Fault keeps my skills sharp.  I'm constantly exposed to new problems, to people applying technology in interesting ways that I had never thought about, and to new types of issues that I had never needed to solve before.  Spending time on Server Fault is an investment in myself.  I know more about my industry because of that site.  It happens again and again that I'll end up reading 3000-word TechNet articles and digging through MSDN documentation on the Active Directory schema just to be able to answer someone else's Server Fault question.  That's personal enrichment.

And more importantly, I've made friends there.  People that I've had the pleasure of talking to over the phone and doing business with in real life.  I've stayed up many late, alcohol-fueled nights in the chat room with these guys talking about everything from FusionIO cards to the U.S. Constitution to why I should quit my job and go work with Mark. ;)

In fact, I'm hoping to meet up with some of these guys at TechEd 2014!

 

IPv4Address Attribute In Get-ADComputer

by Ryan 2. December 2013 12:00

Guten Tag, readers!

Administrators who use Microsoft's Active Directory module for Powershell are most likely familiar with the Get-ADComputer cmdlet.  This cmdlet retrieves information from the Active Directory database about a given computer object.  Seems pretty straightforward, but recently I started wondering about something in Get-ADComputer's output:

Get-ADComputer IPv4Address

IPv4Address?  I don't recall that data being stored in Active Directory... well, not as an attribute of the computer objects themselves, anyway.  If you take a look at a computer object with ADSI Edit, the closest thing you'll find is an ipHostNumber attribute, but it appears to not be used:

ADSI Edit Computer Properties

Hmm... well, by this point, if you're anything like me, you're probably thinking that a DNS query is about the only other way that the cmdlet could be getting this data.  But I wasn't satisfied with just saying "it's DNS, dummy," and forgetting about it.  I wanted to know exactly what was going on under the hood.

So I started by disassembling the entire Microsoft.ActiveDirectory.Management assembly.  (How did I know which assembly to look for?)

After searching the resulting source code for ipv4, it started to become quite clear.  From Microsoft.ActiveDirectory.Management.Commands.ADComputerFactory<T>:

internal static void ToExtendedIPv4(string extendedAttribute, string[] directoryAttributes, ADEntity userObj, ADEntity directoryObj, CmdletSessionInfo cmdletSessionInfo)
{
  if (directoryObj.Contains(directoryAttributes[0]))
  {
    string dnsHostName = directoryObj[directoryAttributes[0]].Value as string;
    userObj.Add(extendedAttribute, (object) IPUtil.GetIPAddress(dnsHostName, IPUtil.IPVersion.IPv4));
  }
  else
    userObj.Add(extendedAttribute, new ADPropertyValueCollection());
}

Alright, so now we know that Get-ADComputer is using another class named IPUtil to get the IP address of a computer as it runs. Let's go look at IPUtil:

internal static string GetIPAddress(string dnsHostName, IPUtil.IPVersion ipVersion)
{
  if (string.IsNullOrEmpty(dnsHostName))
    return (string) null;
  try
  {
    foreach (IPAddress ipAddress in Dns.GetHostEntry(dnsHostName).AddressList)
    {
      if (ipAddress.AddressFamily == (AddressFamily) ipVersion && (ipVersion != IPUtil.IPVersion.IPv6 || !ipAddress.IsIPv6LinkLocal && !ipAddress.IsIPv6SiteLocal))
        return ipAddress.ToString();
    }
    return (string) null;
  }
  catch (SocketException ex)
  {
    return (string) null;
  }
}

Ahh, there it is.  The ole' trusty, tried and true System.Net.Dns.GetHostEntry() method.  The cmdlet is running that code every time you look up a computer object.  Also notice that the method returns the first valid IP address that it finds, so we know that this cmdlet isn't going to work very well for computers with multiple IP addresses.  It would have been trivial to make the cmdlet return an array of all valid IP addresses instead, but alas, the Powershell developers did not think that was necessary.  And of course if the DNS query fails for any reason, you simply end up with a null for the IPv4Address field.
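
If you want to see for yourself that it really is just a DNS lookup, a quick side-by-side comparison makes it obvious. (The computer name here is hypothetical; substitute one from your own domain.)

# IPv4Address as reported by the cmdlet (it's an extended, constructed property):
Get-ADComputer 'SERVER01' -Properties IPv4Address | Select-Object Name, DNSHostName, IPv4Address

# The same thing the cmdlet does under the hood - an ordinary forward DNS lookup:
$DnsHostName = (Get-ADComputer 'SERVER01').DNSHostName
[System.Net.Dns]::GetHostEntry($DnsHostName).AddressList |
    Where-Object { $_.AddressFamily -eq 'InterNetwork' } |
    Select-Object -First 1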

I've noticed that Microsoft's Active Directory cmdlets have many little "value-added" attributes baked into them, but sometimes they can cause confusion, because you aren't sure where the data is coming from, or the "friendly" name that Powershell ascribes to an attribute doesn't match the attribute's name in Active Directory, etc.

Windows Emergency Management Services

by Ryan 12. November 2013 20:37

BSOD

Today we're going to talk about one of the more esoteric features of Windows.  A feature that even some seasoned sysadmins don't know about, and that almost nobody outside of kernel debuggers and device driver writers in Redmond ever uses...

Emergency Management Services!

Imagine you have a Windows computer that has suffered a blue screen of death. If you want to sound more savvy, you might call it a STOP error or a bug check. Pictured is a very old example of a BSoD, but it's just so much more iconic than the pretty new Win8 one with the giant frowny face on it.

So you're sitting there staring at a blue screen on the computer's console... can you still reboot the machine gracefully?  Or even crazier, could you still run, for example, Powershell scripts on this machine even after it has suffered some massive hardware failure?

Don't reach for that power button just yet, because yes you can!

You might have thought that once a Windows computer has blue-screened, then it's done. It's stopped forever and it cannot execute any more code, period.  I thought that myself for a long time. But lo and behold, there's still a little juice left even after you've blue-screened, and all you need is a serial or USB cable.  That's where Emergency Management Services comes in.

As the name implies, EMS is typically there for when all else fails. For when your computer has already gone to hell in a handbasket. You could consider it an out-of-band management solution.

Of course you need to have already enabled it beforehand, not after a bug check has already occurred. You'd enable it on Vista/2008 and above like so:

Bcdedit.exe /EMS ON
Bcdedit.exe /EMSSETTINGS BIOS

If using a USB port, or

Bcdedit.exe /EMS ON
Bcdedit.exe /EMSSETTINGS EMSPORT:2 EMSBAUDRATE:9600

If using an RS-232 serial port. (How quaint.)

Now that it's enabled, you can connect to the Special Administration Console (SAC.)

SAC Special Administration Console

From here, you can launch a command prompt (Cmd.exe,) and from there, you can launch Powershell.exe!  All over a serial or USB cable connection. If the regular SAC mode cannot be entered for some reason, then EMS will put you in !SAC mode, where you can still at least read the event logs and reboot the server in a more graceful manner than just pulling the plug.
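
For reference, these are the handful of SAC commands you would actually use in that situation. (This is a rough sketch from memory, so verify against your own system; type ? at the SAC> prompt for the authoritative list.)

cmd        (creates a Command Prompt channel)
ch -si 1   (switches to that channel, where you can then run Powershell.exe)
i          (lists and sets IP address information)
restart    (gracefully restarts the machine)
shutdown   (gracefully shuts it down)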

Mark Russinovich has this to say about the Windows boot up process as it concerns EMS:

"At this point, InitBootProcessor enumerates the boot-start drivers that were loaded by Winload and calls DbgLoadImageSymbols to inform the kernel debugger (if attached) to load symbols for each of these drivers. If the host debugger has configured the break on symbol load option, this will be the earliest point for a kernel debugger to gain control of the system. InitBootProcessor now calls HvlInit System, which attempts to connect to the hypervisor in case Windows might be running inside a Hyper-V host system’s child partition. When the function returns, it calls HeadlessInit to initialize the serial console if the machine was configured for Emergency Management Services (EMS)."
Mark Russinovich, David Solomon, Alex Ionescu, Windows Internals 6th Ed.

So there you have it. Even when faced with a BSoD, if you have an opportunity to shut down or reboot the machine in a more graceful manner than just pulling the electricity from it, then you should do it.

More Windows and AD Cryptography Mumbo-Jumbo

by Ryan 6. November 2013 09:41

I've still had my head pretty deep into cryptography and hashing as far as Windows and Active Directory is concerned, and I figured it was worth putting here in case you're interested.  We're going to talk about things like NTLM and how Windows stores hashes, and more.

The term NTLM is a loaded one, as the acronym is often used to refer to several different things.  It not only refers to Microsoft’s implementation of another standard algorithm for creating hashes, but it also refers to a network protocol.  The NTLM used for storing password hashes on disk (aka NT hash) is a totally different thing than the NTLM used to transmit authentication data across a TCP/IP network.  There’s the original LAN Manager protocol, which is worse than NT LAN Manager (NTLM or NTLMv1,) which is worse than NTLMv2, which is worse than NTLMv2 with Session Security and so on…  but an NT hash is an NT hash is an NT hash.  When we refer to either NTLMv1 or NTLMv2 specifically, we’re not talking about how the password gets stored on disk, we’re talking about network protocols.

Also, this information refers to Vista/2008+ era stuff.  I don’t want to delve into the ancient history of Windows NT 4 or 2000, so let’s not even discuss LAN Manager/LM.  LM hashes are never stored or transmitted, ever, in an environment that consists of Vista/2008+ stuff.  It’s extinct.

Unless some bonehead admin purposely turned it back on.  In which case, fire him/her.

Moving on…

LOCAL MACHINE STUFF

So here we talk about what goes on in a regular Windows machine with no network connection.  No domain.  All local stuff. No network.

"SO REALLY, WTF IS AN NT HASH!?"

You might ask.  An NT hash is simply the MD4 hash of the little endian UTF-16 encoded plaintext input.  So really it’s MD4.

"So the Windows Security Accounts Manager (SAM) stores MD4 hashes of passwords to disk, then?"

Well no, not directly.  The NT hashes, before being stored, are encrypted with RC4 using the machine’s "boot key," which is both hard to get at as well as being unique to each OS install.  By "hard to get at," I mean that the boot key is scattered across several areas of the registry that require system level access, and you have to know how to read data out of the registry that cannot be seen in Regedit, even if you run it as Local System. And it must be de-obfuscated on top of that.  The boot key and a few other bits of data are then hashed with MD5, and that MD5 hash, plus some more information such as the specific user's security identifier, is used to derive yet another RC4 key, which is what actually protects the stored hash.

So the final answer is that Windows stores local SAM passwords to disk in the registry as RC4-encrypted MD4 hashes using a key that is unique to every machine and is difficult to extract and descramble, unless you happen to be using one of the dozen or so tools that people have written to automate the process.

Active Directory is a different matter altogether.  Active Directory does not store domain user passwords in a local SAM database the same way that a standalone Windows machine stores local user passwords.  Active Directory stores those password hashes in a file on disk named NTDS.dit.  The only password hash that should be stored in the local SAM of a domain controller is the Directory Services Restore Mode password.  The algorithms used to save passwords in NTDS.dit are much different than the algorithms used by standalone Windows machines to store local passwords.  Before I tell you what those algorithms are, I want to mention that the algorithms AD uses to store domain password hashes on disk should not be in scope to auditors, because NTDS.dit is not accessible by unauthorized users.  The operating system maintains an exclusive lock on it and you cannot access it as long as the operating system is running.  Because of that,  the online Directory Services Engine and NTDS.dit together should be treated as one self-contained ‘cryptographic module’ and as such falls under the FIPS 140-2 clause:

"Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form.  Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…"

So even plaintext secrets are acceptable to FIPS 140, as long as they stay within the cryptographic module and cannot be accessed by or sent to outsiders.

Active Directory stores not only the hashed password of domain users, but also their password history.  This is useful for that “Remember the last 24 passwords” Group Policy setting.  So there are encrypted NT hashes stored in NTDS.dit.  Let’s just assume we have an offline NTDS.dit – again, this should not be of any concern to auditors – this is Microsoft proprietary information and was obtained through reverse engineering.  It’s only internal to AD.  FIPS should not be concerned with this because this all takes place "within the cryptographic module."  Access to offline copies of NTDS.dit should be governed by how you protect your backups.

To decrypt a hash in NTDS.dit, first you need to decrypt the Password Encryption Key (PEK) which is itself encrypted and stored in NTDS.dit.  The PEK is the same across all domain controllers, but it is encrypted using the boot key (yes the same one discussed earlier) which is unique on every domain controller.  So once you have recovered the bootkey of a domain controller (which probably means you have already completely owned that domain controller and thus the entire domain so I'm not sure why you'd even be doing this) you can decrypt the PEK contained inside of an offline copy of NTDS.dit that came from that same domain controller.  To do that, you hash the bootkey 1000 times with MD5 and then use that result as the key to the RC4 cipher.  The only point to a thousand rounds of hashing is to make a brute force attack more time consuming.

OK, so now you’ve decrypted the PEK.  Next, use that decrypted PEK, plus 16 bytes of the encrypted hash itself, as key material for another round of RC4.  Finally, use the SID of the user whose hash you are trying to decrypt as the key to a final round of DES to uncover, at last, the NT (MD4) hash for that user.

Now you need to brute-force attack that hash.  Using the program ighashgpu.exe, which uses CUDA to enable all 1344 processing cores on my GeForce GTX 670 graphics card to make brute force attempts on one hash in parallel, I can perform over 4 billion attempts per second to eventually arrive at the original plaintext password of the user.  It doesn’t take long to crack an NT hash any more.

As a side-note, so-called "cached credentials" are actually nothing more than password verifiers.  They’re essentially a hash of a hash, and there is no reversible information contained in a "cached credential" or any information that is of any interest to an attacker.  "Cached credentials" pose no security concern, yet most security firms, out of ignorance, still insist that they be disabled.

So there you have it.  You might notice that nowhere in Windows local password storage or Active Directory password storage was the acronym SHA ever used.  There is no SHA usage anywhere in the above processes, at all. 

 

NETWORK STUFF

Now passing authentication material across the network is an entirely different situation!

I’ll start with the bad news.  Remember earlier when I talked about brute-forcing the decrypted NT hash of another user?  Well that last step is often not even necessary.  NT hashes are password-equivalent, meaning that if I give Windows your hash, it’s as good as giving Windows your password in certain scenarios.  I don’t even need to know what your actual password is.  This is the pass-the-hash attack that you might have heard of.  But it’s not as bad as it sounds.

The good news is that neither Windows nor Active Directory ever sends your bare NT hash over the wire during network transmission.  And you cannot begin a pass-the-hash attack until you’ve already taken over administrative control of some domain joined machine.  That means there is no such thing as using pass-the-hash to own an entire networked AD environment just from an anonymous observer sniffing network packets.  That’s not how it works.  (Fun video on PtH)

Now that we know what an NT hash is, it’s a good time to draw the distinction that whenever we talk about specifically NTLMv1 and NTLMv2, we’re not actually talking about NT hashes anymore.  We’re talking about network communication protocols.  The whole mess is often just called NTLM as a blanket term because it’s all implemented by Microsoft products and it’s all interrelated.

Both NTLMv1 and NTLMv2 are challenge-response protocols, where Client and Server exchange challenges and responses such that Client can prove to Server that he/she knows the password, without ever actually sending the password or its hash directly over the network.  This works because Server already knows Client’s password (hash), either because it’s stored in the local SAM in the case of local Windows accounts, or because Server can forward Client’s data to a domain controller, and the domain controller can verify and respond to Server with “Yep, that’s his password, alright.”

With NTLMv1, you’ll see some usage of DES during network communication. 

With NTLMv2, you’ll see some usage of HMAC-MD5.

There’s also NTLM2 Session, aka NTLMv2 With Session Security, but it uses the same encryption and hashing algorithms as NTLMv2.
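
To make that HMAC-MD5 usage a little more concrete, here is a heavily simplified sketch of how an NTLMv2-style response gets built. This is illustrative only: the real protocol wraps a structured blob containing timestamps, a client nonce and target information, which I'm hand-waving away here, and the variables below are dummy placeholders just so the snippet runs.

# Placeholders only - a real NT hash is the MD4 of the password, and the
# challenge/blob come from the actual NTLM exchange on the wire.
$NTHash          = New-Object Byte[] 16
$ServerChallenge = New-Object Byte[] 8
$ClientBlob      = New-Object Byte[] 28

$User   = 'RYAN'
$Domain = 'CONTOSO'

# Step 1: HMAC-MD5, keyed with the NT hash, over UTF-16LE(UPPERCASE(username) + domain).
$Hmac     = New-Object System.Security.Cryptography.HMACMD5
$Hmac.Key = $NTHash
$NtOwfV2  = $Hmac.ComputeHash([System.Text.Encoding]::Unicode.GetBytes($User.ToUpper() + $Domain))

# Step 2: HMAC-MD5, keyed with that result, over (server challenge + client blob) - the response.
$Hmac.Key = $NtOwfV2
$Response = $Hmac.ComputeHash([Byte[]]($ServerChallenge + $ClientBlob))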

It is possible to completely remove the usage of NTLM network protocols from an Active Directory domain and go pure Kerberos, but it will break many applications.  Here is a fantastic article written by one of my favorite Microsoft employees about doing just that.

So let’s assume that hypothetically we blocked all usage of NTLM network protocols and went pure Kerberos. Kerberos in AD supports only the following encryption:

DES_CBC_CRC    (Disabled by default as of Win7/2008R2) [Source]
DES_CBC_MD5    (Disabled by default as of Win7/2008R2)
RC4_HMAC_MD5
AES256-CTS-HMAC-SHA1-96
AES128-CTS-HMAC-SHA1-96
Future encryption types

 

Of course, there are plenty of other Windows applications that pass authentication traffic over the network besides just AD.  Remote Desktop is a great example.  Remote Desktop traditionally uses RC4, but modern versions of Remote Desktop will negotiate a Transport Layer Security (TLS) connection wherever possible.  (Also known as Network Level Authentication (NLA).)   This is great news because this TLS connection uses the computer’s digital certificate, and that certificate can be automatically created and assigned to the computer by an Enterprise Certificate Authority, and that certificate can be capable of SHA256, SHA384, etc.  However the Certificate Authority administrator defines it.

If you turn on FIPS mode, Remote Desktop can only use TLS 1.0 (as opposed to SSL) when NLA is negotiated, and it can only use 3DES_CBC instead of RC4 when TLS is not negotiated.

Other ciphers that are turned off when FIPS mode is turned on include:

- TLS_RSA_WITH_RC4_128_SHA
- TLS_RSA_WITH_RC4_128_MD5
- SSL_CK_RC4_128_WITH_MD5
- SSL_CK_DES_192_EDE3_CBC_WITH_MD5
- TLS_RSA_WITH_NULL_MD5
- TLS_RSA_WITH_NULL_SHA

 

That will apply to all applications running on Windows that rely upon Schannel.dll.  The application will crash if it calls upon one of the above ciphers when FIPS mode is enabled.
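
Incidentally, if you ever need to check whether a given box actually has FIPS mode turned on, the effective setting lives in the registry. A quick Powershell check (this is the standard policy location on Vista/2008 and later):

# 1 means the "Use FIPS compliant algorithms" policy is enforced; 0 or a missing value means it is not.
Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' -Name Enabled | Select-Object Enabled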

So anyway, that’s about all I got right now.  If you made it to the bottom of  this post I should probably buy you a drink!

 

FIPS 140

by Ryan 29. October 2013 21:23

FIPS 140-2 Logo

Oh yeah, I have a blog! I almost forgot.  I've been busy working.  Let's talk about an extraordinarily fascinating topic: Federal compliance!

FIPS (Federal Information Processing Standards) is a collection of many different standards.  FIPS holds sway mainly in the U.S. and Canada.  Within each standard, there are multiple revisions and multiple levels of classification.  FIPS 140 is about encryption and hashing algorithms.  It’s about accrediting cryptographic modules.  Here’s an example of a certificate.  The FIPS 140-2 revision is the current standard, and FIPS 140-3 is under development with no announced release date yet.  It does not matter if your homebrew cryptography is technically “better” than anything else ever; if your cryptographic module has not gone through the code submission and certification process, then it is not FIPS-approved.  You have to submit your source code/device/module to the government to gain that approval, no matter how amazing your cryptography is.  In fact, the government is free to certify weaker algorithms in favor of stronger ones just because the weaker algorithms have undergone the certification process when the stronger ones have not, and they have historically done so.  (Triple-DES being the prime example.)

There is even a welcome kit, with stickers.  You need to put these tamper-proof stickers on your stuff for certain levels of FIPS compliance.

So if you are ever writing any software of your own, please do not try to roll your own cryptography. Use the approved libraries that have already gone through certification. Your custom crypto has about a 100% chance of being worse than AES/SHA (NSA backdoors notwithstanding,) and it will never be certifiable for use in a secure Federal environment anyway.  Also avoid things like re-hashing your hash with another hashing algorithm in an attempt to be ‘clever’ – doing so can ironically make your hash weaker.

And the Feds are picky.  For instance, if programming for Windows in .NET, the use of the System.Security.Cryptography.SHA1CryptoServiceProvider class may be acceptable, while the use of the System.Security.Cryptography.SHA1Managed class is not.  It doesn’t mean the methods in the SHA1Managed class are any worse, it simply means Microsoft has not submitted them for approval. 
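
You can see that distinction for yourself on a box with FIPS mode enabled. The CSP-backed class keeps working because it calls into a validated module, while the purely managed implementation throws. (A quick illustration; the exception only appears when FIPS enforcement is actually turned on.)

# Fine under FIPS enforcement - this wraps the validated Windows crypto provider:
$ShaCsp = New-Object System.Security.Cryptography.SHA1CryptoServiceProvider

# Throws an InvalidOperationException under FIPS enforcement - a managed implementation that was never submitted for validation:
$ShaManaged = New-Object System.Security.Cryptography.SHA1Managed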

Many major vendors such as Microsoft and Cisco go through this process for every new version of product that they release.  It costs money and time to get your product FIPS-certified.  Maybe it’s a Cisco ASA appliance, or maybe it’s a simple Windows DLL. 

The most recent publication of FIPS 140-2 Annex A lists approved security functions (algorithms.)  It lists AES and SHA-1 as acceptable, among others. So if your application uses only approved implementations of AES and SHA-1 algorithms, then that application should be acceptable according to FIPS 140-2.  If your application uses an MD5 hashing algorithm during communication, that product is NOT acceptable for use in an environment where FIPS compliance must be maintained. 

However, there is also this contradictory quote from NIST:

“The U.S. National Institute of Standards and Technology says, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010" [22]”

So it seems to me that there are contradictory government statements regarding the usage of security functions.  The most recent draft of FIPS 140-2 Annex A clearly lists SHA-1 as an acceptable hashing algorithm, yet, the quote from NIST says that government agencies must use only SHA-2 after 2010.  Not sure what the answer is to that. 

These algorithms can be broken up into two categories: encryption algorithms and hashing algorithms.  An example of a FIPS encryption algorithm is AES (which consists of three members of the Rijndael family of ciphers, adopted in 2001, and has a much cooler name.)  Encryption algorithms can be reversed/decrypted, that is, converted back into their original form from before they were encrypted.
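
To illustrate that reversibility in Powershell terms, here is a tiny round trip with AES: the same key and IV that encrypted the data turn the ciphertext right back into the plaintext.

$Aes = [System.Security.Cryptography.Aes]::Create()   # generates a random key and IV for us
$Plaintext  = [System.Text.Encoding]::UTF8.GetBytes('Attack at dawn')

$Ciphertext = $Aes.CreateEncryptor().TransformFinalBlock($Plaintext, 0, $Plaintext.Length)
[System.Text.Encoding]::UTF8.GetString($Aes.CreateDecryptor().TransformFinalBlock($Ciphertext, 0, $Ciphertext.Length))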

Hashing algorithms on the other hand, are also known as one-way functions.  They are mathematically one-way and cannot be reversed.  Once you hash something, you cannot “un-hash” it, no matter how much computing power you have.  Hashing algorithms take any amount of data, of an arbitrary size, and mathematically map it to a “hash” of fixed length.  For instance, the SHA-256 algorithm will map any chunk of data, whether it be 10 bytes or 2 gigabytes, into a 256 bit hash.  Always 256 bit output, no matter the size of the input.
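
That fixed-length property is easy to see in Powershell; two wildly different-sized inputs both come out as 32-byte (256-bit) digests:

$Sha256 = [System.Security.Cryptography.SHA256]::Create()

$TinyInput = [System.Text.Encoding]::UTF8.GetBytes('hi')
$HugeInput = New-Object Byte[] (10MB)   # ten megabytes of zeroes

$Sha256.ComputeHash($TinyInput).Length   # 32
$Sha256.ComputeHash($HugeInput).Length   # 32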

This is why the hash of a password is generally considered decently secure, because there is NO way to reverse the hash, so you can pass that hash to someone else via insecure means (e.g. over a network connection,) and if the other person knows what your password should be, then they can know that the hash you gave them proves that you know the actual password.  That's a bit of a simplification, but it gets the point across.

If you were trying to attack a hash, all you can do, if you know what hash algorithm was used, is to keep feeding that same hash algorithm new inputs, maybe millions or billions of new inputs a second, and hope that maybe you can reproduce the same hash.  If you can reproduce the same hash, then you know your input was the same as the original ‘plaintext’ that you were trying to figure out.  Maybe it was somebody’s password.  This is the essence of a ‘brute-force’ attack against a password hash.
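
A toy version of that attack looks something like this in Powershell, using SHA-256 and a four-word "dictionary" just to show the shape of it. Real crackers do exactly the same thing, only billions of times per second:

$Sha256 = [System.Security.Cryptography.SHA256]::Create()

# Pretend this is the hash we recovered from somewhere; in real life we would not know the plaintext.
$Target = -join ($Sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes('hunter2')) | ForEach-Object { $_.ToString('x2') })

foreach ($Guess in 'password', 'letmein', 'hunter2', 'secret123')
{
    $Hash = -join ($Sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Guess)) | ForEach-Object { $_.ToString('x2') })
    If ($Hash -eq $Target) { "Cracked it - the plaintext was '$Guess'"; break }
}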

Logically, if all inputs regardless of size, are mapped to a fixed size, then it stands to reason that there must be multiple sets of data that, when hashed, result in the same hash.  These are known as hash collisions.  They are very rare, but they are very bad, and collisions are the reason we needed to migrate away from the MD5 hashing algorithm, and we will eventually need to migrate away from the SHA-1 hashing algorithm.  (No collisions have been found in SHA-1 yet that I know of.)  Imagine if I could create a fake SSL certificate that, when I creatively flipped a few bits here and there, resulted in the same hash as a popular globally trusted certificate!  That would be very bad.

Also worth noting is that SHA-2 is an umbrella term that includes SHA256, SHA384, SHA512, etc.

FIPS 140 is only concerned with algorithms used for external communication: any communication outside of the application or module, whether that be network communication or communication with another application on the same system.  FIPS 140 is not concerned with algorithms used to handle data within the application itself, within its own private memory, that never leaves the application and cannot be accessed by unauthorized users.  Here is an excerpt from the 140-2 standard to back up my claim:

“Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form. Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…”

Let’s use Active Directory as an example.  This is why, when someone gets concerned about what algorithms AD uses internally, you should refer them to the above paragraph and tell them not to worry about it.  Even if it were plaintext (it’s not, but even if hypothetically it were,) it isn’t in scope for FIPS because it is internal only to the application.  When Active Directory and its domain members are operated in FIPS mode, connections made via Schannel.dll, Remote Desktop, etc., will only use FIPS compliant algorithms. If you had applications before that make calls to non-FIPS crypto libraries, those applications will now crash.

Another loophole that has appeared to satisfy FIPS requirements in the past is wrapping a weaker algorithm inside of a stronger one.  For instance, a classic implementation of the RADIUS protocol utilizes the MD5 hashing algorithm during network communications.  MD5 is a big no-no.  However, see this excerpt from Cisco:

“RADIUS keywrap support is an extension of the RADIUS protocol. It provides a FIPS-certifiable means for the Cisco Access Control Server (ACS) to authenticate RADIUS messages and distribute session keys.”

So by simply wrapping weaker RADIUS keys inside of AES, it becomes FIPS-certifiable once again.  It would seem to follow that this logic also applies when using TLS and IPsec, as they are able to use very strong algorithms (such as SHA-2) that most applications do not natively support.

So with all that said, if you need the highest levels of network security, you need 802.1x and IPsec to protect all those applications that can't protect themselves.

Bare Minimum Required to Promote a Domain Controller Into a Domain

by Ryan 13. October 2013 13:17

Hiya,

This is something I meant to blog about months ago, but for some reason I let it slip my mind. It just came up again in a conversation I had yesterday, and I couldn't believe I forgot to post it here. (It also may or may not be similar to a test question that someone might encounter if he or she were taking some Microsoft-centric certification tests.)

It started when someone on ServerFault asked the question, "Do you need a GC online to DCPROMO?"

Well the short answer to that question is that no, you don't need a global catalog online (or reachable) from the computer you are trying to simply promote into a domain controller. But that got me thinking: I'd like to go a step further and see for myself what the bare minimum requirements are for promoting a computer to a domain controller in an existing domain, especially concerning the accessibility of certain FSMO roles from the new DC. I don't care about anything else right now (such as how useful this DC might be after it's promoted) except for just successfully completing the DCPromo process.

On one hand, this might seem like just a silly theoretical exercise, but on the other hand, you just might want to have this knowledge if you ever work in a large enterprise environment where your network is not fully routed, and all DCs are not fully meshed. You might need to create a domain controller in a segment of the network where it has network connectivity to some other DCs, but not all of them.

Well I have a fine lab handy, so let's get this show on the road.

  1. Create three computers.
  2. Make two of them DCs for the same single-domain forest (of the 2008+ variety.)
  3. Make only one of them a global catalog.
  4. Leave all FSMOs on the first domain controller, for now.

So when you promote a writable domain controller, you need two things: another writable domain controller online from which to replicate the directory, and your first RID pool allocation directly from the RID pool FSMO role holder. When you promote an RODC, you don't even need the RIDs, since RODCs don't create objects or outbound replicate.  If the computer cannot reach the RID pool master, as in direct RPC connectivity, DCPROMO will give you this message:

You will not be able to install a writable replica domain controller at this time because the RID master DC1.domain.com is offline.

But you can still create an RODC, as long as the domain controller with whom you can communicate is not also an RODC - it has to be a RWDC.

So the final steps to prove this theory are:

  1. Transfer only the RID master to the second domain controller.
  2. Power down the first domain controller.

At this point, only the RID pool master is online, and no global catalog is online. Now run DCPromo on your third computer. Can you successfully promote the new domain controller as a RWDC?

Yes you can.

Now, you'll encounter some other problems down the road, such as the new DC not being able to process password changes because it cannot contact the PDCe, but you've successfully added a new domain controller to the domain nonetheless.
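
Incidentally, if you're doing this on 2012 or later with the ADDSDeployment module instead of the old DCPromo wizard, you can explicitly point the new DC at the one domain controller it can actually reach, rather than letting it pick for itself. The server and domain names below are made up for the example:

Install-ADDSDomainController -DomainName 'domain.com' `
                             -ReplicationSourceDC 'DC2.domain.com' `
                             -Credential (Get-Credential)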

Site Upgrade

by Ryan 7. October 2013 20:47

Upgraded this site from Blogengine.NET 2.5 to 2.8 this evening. This post is basically a test just to see if anything is broken from the upgrade. Sorry for the inconvenience.

 

Testing quotation.

Edit: Ugh, looks like the code formatter's broken. :(

Edit: Put the old SyntaxHighlighter back in. It's not awesome, but it's better than nothing.

Edit: Well, one positive thing that came out of this is that I vastly improved Alex Gorbatchev's old SyntaxHighlighter for this blog. I just modernized the Powershell brush to include all the new Cmdlets and aliases, since the brush had not been updated since around 2009. Just let me know if you want it.

Locating Active Directory Site Options with Powershell

by Ryan 2. October 2013 16:30

So as you may know, I hang out on ServerFault a lot.  And last night, one of my favorite ServerFault members, Mark Marra, asked an interesting question there that sent me on a long journey of research in order to answer it.

(Mark's got his own IT blog by the way which you should totally check out. He's a world class Active Directory guy, the kind of guy that doesn't usually ask easy questions, so I'm proud of myself whenever I'm able to answer one of his questions.)

The link to the actual question and answer on ServerFault is here, most of which I am about to repeat in this post, but I'll see if I can embellish a little here on this blog.

Question:

How can I use PowerShell to find AD site options like +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED in PowerShell? I've been playing around with the following command, but can't get it to spit out anything useful.

Get-ADObject -Filter 'objectClass -eq "site"' `
-Searchbase (Get-ADRootDSE).ConfigurationNamingContext `
-Properties options

Answer:

The above command is very close, but off by just a hair. The short and simple answer is that the above command is searching for the options attribute of the Site object itself, when we actually need to be looking at the NTDS Site Settings object belonging to that site. And furthermore, there is no Powershell-Friendly way of getting this data as of yet, i.e., there is no simple Get-ADSiteOptions Cmdlet... but we may just have the cure for that if you can get through the rest of this blog post. We just need to figure out where exactly to find the data, and then we can use Powershell to grab it, wherever it may be hiding.

Take the following two commands: 

repadmin commands

Repadmin /options <DC> gives us the DSA options that are specific to the domain controller being queried, such as whether the domain controller is a global catalog or not, and the Repadmin /siteoptions <DC> command gives us the options applied to the Active Directory Site to which the domain controller being queried belongs (or you can specify that you want to know the settings for another site with the /site:California parameter. Full repadmin syntax here, or just use the /experthelp parameter.)

Note that these settings are relatively advanced settings in AD, so you may not work with them on a regular basis. Sites by default have no options defined, so if you find yourself working with these options, chances are you have a more complex AD replication structure on your hands than the average Joe. If all you have are a few sites that are fully bridged/meshed, all with plenty of bandwidth, then you probably have no need to modify these settings. More importantly, if you modify any of these settings, it's very important that you document your changes, so that future administrators will know what you've done to the domain.

So where does repadmin.exe get this information?

The settings for individual domain controllers come from here: 

ADSI Edit 1

That is, the options attribute of the NTDS Settings object for each domain controller.

The site options come from the NTDS Site Settings object for each site (not the Site object itself):

Site Options

Here is the basic MSDN documentation on the Options attribute:

A bitfield, where the meaning of the bits varies from objectClass to objectClass. Can occur on Inter-Site-Transport, NTDS-Connection, NTDS-DSA, NTDS-Site-Settings, and Site-Link objects.

Now we know exactly which bits repadmin.exe works on when we issue a command such as repadmin /options DC01 +IS_GC or repadmin /siteoptions DC01 /site:Arlington +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED. Fortunately, repadmin.exe as well as the ADSI Edit MMC snap-in both have bitmask translators in their code, so that they can show us the friendly names of the value of the options attribute, instead of just a 32-bit hexadecimal code.

If we want to roll our own Get-ADSiteOptions Cmdlet, we'll have to build our own bitmask translator too.

Fortunately the bitfields for both the DC settings and the Site settings are documented, here and here. Here is an excerpt for the Site options bitmask: 

Site Options Bitmask

So now we have enough information to start working on our Get-ADSiteOptions Cmdlet. Let's start with this basic snippet of Powershell:

ForEach($Site In (Get-ADObject -Filter 'objectClass -eq "site"' -Searchbase (Get-ADRootDSE).ConfigurationNamingContext)) 
{ 
    Get-ADObject "CN=NTDS Site Settings,$($Site.DistinguishedName)" -Properties Options 
}

What that does is get the DistinguishedName of every Site in the forest, iterate through them and get the attributes of each Site's NTDS Site Settings object. If the options attribute has not been set for a Site (which remember, is the default,) then it will not be shown. Only Sites with modified options will show as having an options attribute at all. Furthermore, in Powershell, it will come out looking like this:

Powershell site options

It's in decimal. 16 in decimal is 0x10 in hex, which we now know means IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED.

So, without further ado, let's see if we can build our own Get-ADSiteOptions Cmdlet:

#Requires -Version 3
#Requires -Modules ActiveDirectory
Function Get-ADSiteOptions
{
<#
.SYNOPSIS
    This Cmdlet gets Active Directory Site Options.
.DESCRIPTION
    This Cmdlet gets Active Directory Site Options.
    We can fill out the rest of this comment-based help later.
.LINK
    http://myotherpcisacloud.com
.NOTES
    Written by Ryan Ries, October 2013. ryanries09@gmail.com.
#>
    [CmdletBinding()]
    Param()
    BEGIN
    {
        Set-StrictMode -Version Latest

        # This enum comes from NtDsAPI.h in the Windows SDK.
        # Also thanks to Jason Scott for pointing it out to me. http://serverfault.com/users/23067/jscott
        Add-Type -TypeDefinition @" 
                                   [System.Flags]
                                   public enum nTDSSiteSettingsFlags {
                                   NTDSSETTINGS_OPT_IS_AUTO_TOPOLOGY_DISABLED            = 0x00000001,
                                   NTDSSETTINGS_OPT_IS_TOPL_CLEANUP_DISABLED             = 0x00000002,
                                   NTDSSETTINGS_OPT_IS_TOPL_MIN_HOPS_DISABLED            = 0x00000004,
                                   NTDSSETTINGS_OPT_IS_TOPL_DETECT_STALE_DISABLED        = 0x00000008,
                                   NTDSSETTINGS_OPT_IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED = 0x00000010,
                                   NTDSSETTINGS_OPT_IS_GROUP_CACHING_ENABLED             = 0x00000020,
                                   NTDSSETTINGS_OPT_FORCE_KCC_WHISTLER_BEHAVIOR          = 0x00000040,
                                   NTDSSETTINGS_OPT_FORCE_KCC_W2K_ELECTION               = 0x00000080,
                                   NTDSSETTINGS_OPT_IS_RAND_BH_SELECTION_DISABLED        = 0x00000100,
                                   NTDSSETTINGS_OPT_IS_SCHEDULE_HASHING_ENABLED          = 0x00000200,
                                   NTDSSETTINGS_OPT_IS_REDUNDANT_SERVER_TOPOLOGY_ENABLED = 0x00000400  }
"@
        ForEach($Site In (Get-ADObject -Filter 'objectClass -eq "site"' -Searchbase (Get-ADRootDSE).ConfigurationNamingContext)) 
        {            
            $SiteSettings = Get-ADObject "CN=NTDS Site Settings,$($Site.DistinguishedName)" -Properties Options
            If(!$SiteSettings.PSObject.Properties.Match('Options').Count -OR $SiteSettings.Options -EQ 0)
            {
                # I went with '(none)' here to give it a more classic repadmin.exe feel.
                # You could also go with $Null, or omit the property altogether for a more modern, Powershell feel.
                [PSCustomObject]@{SiteName=$Site.Name; DistinguishedName=$Site.DistinguishedName; SiteOptions='(none)'} 
            }
            Else
            {
                [PSCustomObject]@{SiteName=$Site.Name; DistinguishedName=$Site.DistinguishedName; SiteOptions=[Enum]::Parse('nTDSSiteSettingsFlags', $SiteSettings.Options)}
            }
        }
    }
}

And finally, a screenshot of the fruits of our labor - what we set out to do, which was to view AD Site options in Powershell:

Mind Your Powershell Efficiency Optimizations

by Ryan 22. September 2013 11:28

A lazy Sunday morning post!

As most would agree, Powershell is the most powerful Windows administration tool ever seen. In my opinion, you cannot continue to be a Windows admin without learning it. However, Powershell is not breaking any speed records. In fact it can be downright slow. (After all, it's called Power-shell, not Speed-shell.)

So, as developers or sysadmins or devopsapotami or anyone else who writes Powershell, I implore you not to further sully Powershell's reputation for being slow: take the time to benchmark and optimize your script/code.

Let's look at an example.

$Numbers = @()
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers += Get-Random } }

I'm simply creating an array (of indeterminate size) and proceeding to fill it with 10,000 random numbers.  Notice the use of Measure-Command { }, which is what you want to use for seeing exactly how long things take to execute in Powershell.  The above procedure took 21.3 seconds.

So let's swap in a strongly-typed array and do the exact same thing:

[Int[]]$Numbers = New-Object Int[] 10000
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers[$_] = Get-Random } }

We can produce the exact same result, that is, a 10,000-element array full of random integers, in 0.47 seconds.

That's an approximate 45x speed improvement.

We called the Get-Random Cmdlet 10,000 times in both examples, so that is probably not our bottleneck. Using [Int[]]$Numbers = @() doesn't help either, so I don't think it's the boxing and unboxing overhead that you'd see with an ArrayList. Instead, it seems most likely that the dramatic performance difference was in using an array of fixed size, which eliminates the need to resize the array 10,000 times.
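
If you don't know the final element count up front and a fixed-size array isn't practical, a generic List gets you most of the same benefit, because it grows its internal buffer geometrically instead of rebuilding the array on every single addition. (Timings will obviously vary from machine to machine.)

$Numbers = New-Object System.Collections.Generic.List[int]
Measure-Command { foreach ($i in 0..9999) { $Numbers.Add((Get-Random)) } }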

Once you've got your script working, then you should think about optimizing it. Use Measure-Command to see how long specific pieces of your script take. Powershell, and all of .NET to a larger extent, gives you a ton of flexibility in how you write your code. There is almost never just one way to accomplish something. However, with that flexibility, comes the responsibility of finding the best possible way.

Server Core Page File Management with Powershell

by Ryan 19. September 2013 19:19

A quickie for tonight.

(Heh heh...)

Microsoft is really pushing to make this command line-only, Server Core and Powershell thing happen. No more GUI. Everything needs to get done on the command line. Wooo command line. Love it.

So... how the heck do you set the size and location of the paging file(s) without a GUI? Could you do it without Googling (er, Binging) it? Right now?

You will be able to if you remember this:

$PageFileSizeMB = [Math]::Truncate(((Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory + 200MB) / 1MB)
Set-CimInstance -Query "SELECT * FROM Win32_ComputerSystem" -Property @{AutomaticManagedPagefile="False"}
Set-CimInstance -Query "SELECT * FROM Win32_PageFileSetting" -Property @{InitialSize=$PageFileSizeMB; MaximumSize=$PageFileSizeMB}

The idea here is that I'm first turning off automatic page file management, and then I am setting the size of the page file manually, to be static and to be equal to the size of my RAM, plus a little extra. If you want full memory dumps in case your server crashes, you need a page file that is the size of your physical RAM plus a few extra MB.
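
And to double-check your work afterward, still without a GUI:

Get-CimInstance Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize
Get-CimInstance Win32_ComputerSystem | Select-Object AutomaticManagedPagefile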

You could have also done this with wmic.exe without using Powershell, but when given a choice between Powershell and not-Powershell, I pretty much always go Powershell.

Did I mention Powershell?

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST


MCITP: Enterprise Administrator

VCP5-DCV

Profile for Ryan Ries at Server Fault, Q&A for system administrators

LOPSA

GitHub: github.com/ryanries

 

I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.