Verifying RPC Network Connectivity Like A Boss

by Ryan 16. February 2014 10:02

Aloha.  I did some fun Powershelling yesterday and now it's time to share.

If you work in an IT environment that's of any significant size, chances are you have firewalls.  Maybe lots and lots of firewalls. RPC can be a particularly difficult network protocol to work with when it comes to making sure all the ports necessary for its operation are open on your firewalls. I've found that firewall guys sometimes have a hard time allowing the application guy's RPC traffic through their firewalls because of its dynamic nature. Sometimes the application guys don't really know how RPC works, so they don't really know what to ask of the firewall guys.  And to make it even worse, RPC errors can be hard to diagnose.  For instance, the classic RPC error 1722 (0x6BA) - "The RPC server is unavailable" sounds like a network problem at first, but can actually mean access denied, or DNS resolution failure, etc.

MSRPC, or Microsoft Remote Procedure Call, is Microsoft's implementation of DCE (Distributed Computing Environment) RPC. It's been around a long time and is pervasive in an environment containing Windows computers. Tons of Windows applications and components depend on it.

A very brief summary of how the protocol works: There is an "endpoint mapper" that runs on TCP port 135. You can bind to that port on a remote computer anonymously and enumerate all the various RPC services available on that computer.  The services may be using named pipes or TCP/IP.  Named pipes will use port 445.  The services that are using TCP are each dynamically allocated their own TCP ports, which are drawn from a pool of port numbers. This pool of port numbers is by default 1024-5000 on XP/2003 and below, and 49152-65535 on Vista/2008 and above. (The ephemeral port range.) You can customize that port range that RPC will use if you wish, like so:

reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /f /d 8000-9000
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /f /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /f /d Y

And/Or

netsh int ipv4 set dynamicport tcp start=8000 num=1001
netsh int ipv4 set dynamicport udp start=8000 num=1001
netsh int ipv6 set dynamicport tcp start=8000 num=1001
netsh int ipv6 set dynamicport udp start=8000 num=1001

This is why we have to query the endpoint mapper first: we can't just guess which port a particular service is currently listening on.

So, I wrote a little something in Powershell that will test the network connectivity of a remote machine for RPC, by querying the endpoint mapper, and then querying each port that the endpoint mapper tells me that it's currently using.


#Requires -Version 3
Function Test-RPC
{
    [CmdletBinding(SupportsShouldProcess=$True)]
    Param([Parameter(ValueFromPipeline=$True)][String[]]$ComputerName = 'localhost')
    BEGIN
    {
        Set-StrictMode -Version Latest
        $PInvokeCode = @'
        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public class Rpc
        {
            // I found this crud in RpcDce.h

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingFromStringBinding(string StringBinding, out IntPtr Binding);

            [DllImport("Rpcrt4.dll")]
            public static extern int RpcBindingFree(ref IntPtr Binding);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqBegin(IntPtr EpBinding,
                                                    int InquiryType, // 0x00000000 = RPC_C_EP_ALL_ELTS
                                                    int IfId,
                                                    int VersOption,
                                                    string ObjectUuid,
                                                    out IntPtr InquiryContext);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqNext(IntPtr InquiryContext,
                                                    out RPC_IF_ID IfId,
                                                    out IntPtr Binding,
                                                    out Guid ObjectUuid,
                                                    out IntPtr Annotation);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingToStringBinding(IntPtr Binding, out IntPtr StringBinding);

            public struct RPC_IF_ID
            {
                public Guid Uuid;
                public ushort VersMajor;
                public ushort VersMinor;
            }

            public static List<int> QueryEPM(string host)
            {
                List<int> ports = new List<int>();
                int retCode = 0; // RPC_S_OK                
                IntPtr bindingHandle = IntPtr.Zero;
                IntPtr inquiryContext = IntPtr.Zero;                
                IntPtr elementBindingHandle = IntPtr.Zero;
                RPC_IF_ID elementIfId;
                Guid elementUuid;
                IntPtr elementAnnotation;

                try
                {                    
                    retCode = RpcBindingFromStringBinding("ncacn_ip_tcp:" + host, out bindingHandle);
                    if (retCode != 0)
                        throw new Exception("RpcBindingFromStringBinding: " + retCode);

                    retCode = RpcMgmtEpEltInqBegin(bindingHandle, 0, 0, 0, string.Empty, out inquiryContext);
                    if (retCode != 0)
                        throw new Exception("RpcMgmtEpEltInqBegin: " + retCode);
                    
                    do
                    {
                        IntPtr bindString = IntPtr.Zero;
                        retCode = RpcMgmtEpEltInqNext (inquiryContext, out elementIfId, out elementBindingHandle, out elementUuid, out elementAnnotation);
                        if (retCode != 0)
                            if (retCode == 1772)
                                break;

                        retCode = RpcBindingToStringBinding(elementBindingHandle, out bindString);
                        if (retCode != 0)
                            throw new Exception("RpcBindingToStringBinding: " + retCode);
                            
                        string s = Marshal.PtrToStringAuto(bindString).Trim().ToLower();
                        if(s.StartsWith("ncacn_ip_tcp:"))                        
                            ports.Add(int.Parse(s.Split('[')[1].Split(']')[0]));
                        
                        RpcBindingFree(ref elementBindingHandle);
                        
                    }
                    while (retCode != 1772); // RPC_X_NO_MORE_ENTRIES

                }
                catch(Exception ex)
                {
                    Console.WriteLine(ex);
                    return ports;
                }
                finally
                {
                    RpcBindingFree(ref bindingHandle);
                }
                
                return ports;
            }
        }
'@
    }
    PROCESS
    {
        ForEach($Computer In $ComputerName)
        {
            If($PSCmdlet.ShouldProcess($Computer))
            {
                [Bool]$EPMOpen = $False
                $Socket = New-Object Net.Sockets.TcpClient
                
                Try
                {                    
                    $Socket.Connect($Computer, 135)
                    If ($Socket.Connected)
                    {
                        $EPMOpen = $True
                    }
                    $Socket.Close()                    
                }
                Catch
                {
                    $Socket.Dispose()
                }
                
                If ($EPMOpen)
                {
                    Add-Type $PInvokeCode
                    $RPCPorts = [Rpc]::QueryEPM($Computer)
                    [Bool]$AllPortsOpen = $True
                    Foreach ($Port In $RPCPorts)
                    {
                        $Socket = New-Object Net.Sockets.TcpClient
                        Try
                        {
                            $Socket.Connect($Computer, $Port)
                            If (!$Socket.Connected)
                            {
                                $AllPortsOpen = $False
                            }
                            $Socket.Close()
                        }
                        Catch
                        {
                            $AllPortsOpen = $False
                            $Socket.Dispose()
                        }
                    }

                    [PSObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen; 'RPCPortsInUse' = $RPCPorts; 'AllRPCPortsOpen' = $AllPortsOpen}
                }
                Else
                {
                    [PSObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen}
                }
            }
        }
    }
    END
    {

    }
}

And the output will look a little something like this:
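Something along these lines, where the computer name and the port numbers are made up purely for illustration:

PS C:\> Test-RPC -ComputerName SERVER01

Name                           Value
----                           -----
ComputerName                   SERVER01
EndPointMapperOpen             True
RPCPortsInUse                  {49154, 49155, 49158, 49164}
AllRPCPortsOpen                True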


You can also query the endpoint mapper with PortQry.exe -n server01 -e 135, but I was curious about how it worked at a deeper level, so I ended up writing something myself. There weren't many examples of how to use that particular native API, so it was pretty tough.

More Windows and AD Cryptography Mumbo-Jumbo

by Ryan 6. November 2013 09:41

I've still had my head pretty deep into cryptography and hashing as far as Windows and Active Directory is concerned, and I figured it was worth putting here in case you're interested.  We're going to talk about things like NTLM and how Windows stores hashes, and more.

The term NTLM is a loaded one, as the acronym is often used to refer to several different things.  It not only refers to Microsoft’s implementation of another standard algorithm for creating hashes, but it also refers to a network protocol.  The NTLM used for storing password hashes on disk (aka NT hash) is a totally different thing than the NTLM used to transmit authentication data across a TCP/IP network.  There’s the original LAN Manager protocol, which is worse than NT LAN Manager (NTLM or NTLMv1,) which is worse than NTLMv2, which is worse than NTLMv2 with Session Security and so on…  but an NT hash is an NT hash is an NT hash.  When we refer to either NTLMv1 or NTLMv2 specifically, we’re not talking about how the password gets stored on disk, we’re talking about network protocols.

Also, this information refers to Vista/2008+ era stuff.  I don’t want to delve into the ancient history of Windows NT 4 or 2000, so let’s not even discuss LAN Manager/LM.  LM hashes are never stored or transmitted, ever, in an environment that consists of Vista/2008+ stuff.  It’s extinct.

Unless some bonehead admin purposely turned it back on.  In which case, fire him/her.

Moving on…

LOCAL MACHINE STUFF

So here we talk about what goes on in a regular Windows machine with no network connection.  No domain.  All local stuff. No network.

"SO REALLY, WTF IS AN NT HASH!?"

You might ask.  An NT hash is simply the MD4 hash of the little endian UTF-16 encoded plaintext input.  So really it’s MD4.
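If you want to see that for yourself in PowerShell, here's the idea as a sketch. Note that the .NET base class library has no built-in MD4, so Get-MD4Hash below is a hypothetical stand-in for whatever MD4 implementation you have handy; the only part that matters here is the little endian UTF-16 ("Unicode") encoding step.

$plaintext = 'P@ssw0rd'
# NT hash = MD4 over the UTF-16LE bytes of the password:
$passwordBytes = [System.Text.Encoding]::Unicode.GetBytes($plaintext)   # "Unicode" = little endian UTF-16
$ntHash = Get-MD4Hash -InputBytes $passwordBytes                        # hypothetical MD4 helper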

"So the Windows Security Accounts Manager (SAM) stores MD4 hashes of passwords to disk, then?"

Well no, not directly.  The NT hashes, before being stored, are encrypted with RC4 using the machine's "boot key," which is both hard to get at and unique to each OS install.  By "hard to get at," I mean that the boot key is scattered across several areas of the registry that require system-level access, and you have to know how to read data out of the registry that cannot be seen in Regedit, even if you run it as Local System.  And it must be de-obfuscated on top of that.  The boot key and some other bits of data are hashed with MD5, and that MD5 hash, plus some more information such as the specific user's security identifier, is then used to derive the RC4 key that encrypts the NT hash.

So the final answer is that Windows stores local SAM passwords to disk in the registry as RC4-encrypted MD4 hashes using a key that is unique to every machine and is difficult to extract and descramble, unless you happen to be using one of the dozen or so tools that people have written to automate the process.

Active Directory is a different matter altogether.  Active Directory does not store domain user passwords in a local SAM database the same way that a standalone Windows machine stores local user passwords.  Active Directory stores those password hashes in a file on disk named NTDS.dit.  The only password hash that should be stored in the local SAM of a domain controller is the Directory Services Restore Mode password.  The algorithms used to save passwords in NTDS.dit are much different than the algorithms used by standalone Windows machines to store local passwords.  Before I tell you what those algorithms are, I want to mention that the algorithms AD uses to store domain password hashes on disk should not be in scope to auditors, because NTDS.dit is not accessible by unauthorized users.  The operating system maintains an exclusive lock on it and you cannot access it as long as the operating system is running.  Because of that,  the online Directory Services Engine and NTDS.dit together should be treated as one self-contained ‘cryptographic module’ and as such falls under the FIPS 140-2 clause:

"Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form.  Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…"

So even plaintext secrets are acceptable to FIPS 140, as long as they stay within the cryptographic module and cannot be accessed by or sent to outsiders.

Active Directory stores not only the hashed password of domain users, but also their password history.  This is useful for that “Remember the last 24 passwords” Group Policy setting.  So there are encrypted NT hashes stored in NTDS.dit.  Let’s just assume we have an offline NTDS.dit – again, this should not be of any concern to auditors – this is Microsoft proprietary information and was obtained through reverse engineering.  It’s only internal to AD.  FIPS should not be concerned with this because this all takes place "within the cryptographic module."  Access to offline copies of NTDS.dit should be governed by how you protect your backups.

To decrypt a hash in NTDS.dit, first you need to decrypt the Password Encryption Key (PEK) which is itself encrypted and stored in NTDS.dit.  The PEK is the same across all domain controllers, but it is encrypted using the boot key (yes the same one discussed earlier) which is unique on every domain controller.  So once you have recovered the bootkey of a domain controller (which probably means you have already completely owned that domain controller and thus the entire domain so I'm not sure why you'd even be doing this) you can decrypt the PEK contained inside of an offline copy of NTDS.dit that came from that same domain controller.  To do that, you hash the bootkey 1000 times with MD5 and then use that result as the key to the RC4 cipher.  The only point to a thousand rounds of hashing is to make a brute force attack more time consuming.
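Purely as an illustration of that derivation step as I've described it (this is not a full NTDS.dit decryptor; $BootKey is a placeholder byte array, and the actual RC4/DES work is omitted since neither cipher is exposed directly by the .NET base class library):

$md5 = [System.Security.Cryptography.MD5]::Create()
$pekKey = $BootKey
For ($i = 0; $i -lt 1000; $i++)
{
    $pekKey = $md5.ComputeHash($pekKey)   # 1000 rounds of MD5, just to slow down brute force attempts
}
# $pekKey would then serve as the RC4 key used to decrypt the PEK.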

OK, so now you’ve decrypted the PEK.  So use that decrypted PEK, plus 16 bytes of the encrypted hash itself as key material for another round of RC4.  Finally, use the SID of the user whose hash you are trying to decrypt as the key to a final round of DES to uncover, at last, the NT (MD4) hash for that user.

Now you need to brute-force attack that hash.  Using the program ighashgpu.exe, which uses CUDA to enable all 1344 processing cores on my GeForce GTX 670 graphics card to make brute force attempts on one hash in parallel, I can perform over 4 billion attempts per second to eventually arrive at the original plaintext password of the user.  It doesn’t take long to crack an NT hash any more.

As a side-note, so-called "cached credentials" are actually nothing more than password verifiers.  They’re essentially a hash of a hash, and there is no reversible information contained in a "cached credential" or any information that is of any interest to an attacker.  "Cached credentials" pose no security concern, yet most security firms, out of ignorance, still insist that they be disabled.

So there you have it.  You might notice that nowhere in Windows local password storage or Active Directory password storage was the acronym SHA ever used.  There is no SHA usage anywhere in the above processes, at all. 

 

NETWORK STUFF

Now passing authentication material across the network is an entirely different situation!

I’ll start with the bad news.  Remember earlier when I talked about brute-forcing the decrypted NT hash of another user?  Well that last step is often not even necessary.  NT hashes are password-equivalent, meaning that if I give Windows your hash, it’s as good as giving Windows your password in certain scenarios.  I don’t even need to know what your actual password is.  This is the pass-the-hash attack that you might have heard of.  But it’s not as bad as it sounds.

The good news is that neither Windows nor Active Directory ever sends your bare NT hash over the wire during network transmission.  And you cannot begin a pass-the-hash attack until you’ve already taken over administrative control of some domain joined machine.  That means there is no such thing as using pass-the-hash to own an entire networked AD environment just from an anonymous observer sniffing network packets.  That’s not how it works.  (Fun video on PtH)

Now that we know what an NT hash is, it’s a good time to draw the distinction that whenever we talk about specifically NTLMv1 and NTLMv2, we’re not actually talking about NT hashes anymore.  We’re talking about network communication protocols.  The whole mess is often just called NTLM as a blanket term because it’s all implemented by Microsoft products and it’s all interrelated.

Both NTLMv1 and NTLMv2 are challenge-response protocols, where Client and Server exchange challenges and responses such that Client can prove to Server that it knows the password, without ever actually sending the password or its hash directly over the network.  This works because Server already knows Client’s password (hash), either because it’s stored in the local SAM in the case of local Windows accounts, or because Server can forward Client’s data to a domain controller, and the domain controller can verify and respond to Server with “Yep, that’s his password, alright.”

With NTLMv1, you’ll see some usage of DES during network communication. 

With NTLMv2, you’ll see some usage of HMAC-MD5.

There’s also NTLM2 Session, aka NTLMv2 With Session Security, but it uses the same encryption and hashing algorithms as NTLMv2.
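To make the HMAC-MD5 part concrete, here's a rough sketch of the NTLMv2 proof computation as laid out in the public MS-NLMP documentation. It's not a working NTLM client; $NTHash, $ServerChallenge and $Blob are placeholder byte arrays, and $User/$Domain are placeholder strings.

# NTOWFv2 = HMAC-MD5 keyed with the NT hash, over uppercase(user) + domain (UTF-16LE):
$hmac1   = New-Object System.Security.Cryptography.HMACMD5 (,$NTHash)
$ntowfV2 = $hmac1.ComputeHash([System.Text.Encoding]::Unicode.GetBytes($User.ToUpper() + $Domain))

# The proof that actually crosses the wire is another HMAC-MD5, keyed with NTOWFv2,
# over the server challenge concatenated with the client blob (timestamp, client nonce, etc.):
$hmac2   = New-Object System.Security.Cryptography.HMACMD5 (,$ntowfV2)
$ntProof = $hmac2.ComputeHash($ServerChallenge + $Blob)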

It is possible to completely remove the usage of NTLM network protocols from an Active Directory domain and go pure Kerberos, but it will break many applications.  Here is a fantastic article written by one of my favorite Microsoft employees about doing just that.

So let’s assume that hypothetically we blocked all usage of NTLM network protocols and went pure Kerberos. Kerberos in AD supports only the following encryption:

DES_CBC_CRC    (Disabled by default as of Win7/2008R2) [Source]
DES_CBC_MD5    (Disabled by default as of Win7/2008R2)
RC4_HMAC_MD5
AES256-CTS-HMAC-SHA1-96
AES128-CTS-HMAC-SHA1-96
Future encryption types

 

Of course, there are plenty of other Windows applications that pass authentication traffic over the network besides just AD.  Remote Desktop is a great example.  Remote Desktop traditionally uses RC4, but modern versions of Remote Desktop will negotiate a Transport Layer Security (TLS) connection wherever possible, typically in conjunction with Network Level Authentication (NLA).  This is great news because this TLS connection uses the computer’s digital certificate, and that certificate can be automatically created and assigned to the computer by an Enterprise Certificate Authority, and that certificate can be capable of SHA256, SHA384, etc., depending on how the Certificate Authority administrator defines it.

If you turn on FIPS mode, Remote Desktop can only use TLS 1.0 (as opposed to SSL) when NLA is negotiated, and it can only use 3DES_CBC instead of RC4 when TLS is not negotiated.

Other ciphers that are turned off when FIPS mode is turned on include:

- TLS_RSA_WITH_RC4_128_SHA
- TLS_RSA_WITH_RC4_128_MD5
- SSL_CK_RC4_128_WITH_MD5
- SSL_CK_DES_192_EDE3_CBC_WITH_MD5
- TLS_RSA_WITH_NULL_MD5
- TLS_RSA_WITH_NULL_SHA

 

That will apply to all applications running on Windows that rely upon Schannel.dll.  The application will crash if it calls upon one of the above ciphers when FIPS mode is enabled.

So anyway, that’s about all I got right now.  If you made it to the bottom of  this post I should probably buy you a drink!

 

FIPS 140

by Ryan 29. October 2013 21:23

*FIPS 140-2 logo*

Oh yeah, I have a blog! I almost forgot.  I've been busy working.  Let's talk about an extraordinarily fascinating topic: Federal compliance!

FIPS (Federal Information Processing Standards) is a family of standards that holds sway mainly in the U.S. and Canada.  Within each standard, there are multiple revisions and multiple levels of classification.  FIPS 140 is about encryption and hashing algorithms.  It’s about accrediting cryptographic modules.  Here’s an example of a certificate.  The FIPS 140-2 revision is the current standard, and FIPS 140-3 is under development with no announced release date yet.  It does not matter if your homebrew cryptography is technically “better” than anything else ever; if your cryptographic module has not gone through the code submission and certification process, then it is not FIPS-approved.  You have to submit your source code/device/module to the government in order to gain approval, and even the most amazing cryptography the world has ever seen is neither FIPS approved nor compliant until it completes that process.  In fact, the government is free to certify weaker algorithms in favor of stronger ones just because the weaker algorithms have undergone the certification process when the stronger ones have not, and they have historically done so.  (Triple-DES being the prime example.)

There is even a welcome kit, with stickers.  You need to put these tamper-proof stickers on your stuff for certain levels of FIPS compliance.

So if you are ever writing any software of your own, please do not try to roll your own cryptography. Use the approved libraries that have already gone through certification. Your custom crypto has about a 100% chance of being worse than AES/SHA (NSA backdoors notwithstanding,) and it will never be certifiable for use in a secure Federal environment anyway.  Also avoid things like re-hashing your hash with another hashing algorithm in attempt to be ‘clever’ – doing so can ironically make your hash weaker.

And the Feds are picky.  For instance, if programming for Windows in .NET, the use of the System.Security.Cryptography.SHA1CryptoServiceProvider class may be acceptable, while the use of the System.Security.Cryptography.SHA1Managed class is not.  It doesn’t mean the methods in SHA1Managed are any worse; it simply means Microsoft has not submitted that particular implementation for approval.
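You can actually watch that distinction in action on a box with FIPS mode turned on (the registry value HKLM\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy\Enabled set to 1). This is just my own quick illustration, not anything out of the standard itself:

# With FIPS mode enabled:
New-Object System.Security.Cryptography.SHA1CryptoServiceProvider   # fine - this implementation is FIPS-validated
New-Object System.Security.Cryptography.SHA1Managed                 # throws InvalidOperationException - not validated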

Many major vendors such as Microsoft and Cisco go through this process for every new version of product that they release.  It costs money and time to get your product FIPS-certified.  Maybe it’s a Cisco ASA appliance, or maybe it’s a simple Windows DLL. 

The most recent publication of FIPS 140-2 Annex A lists approved security functions (algorithms.)  It lists AES and SHA-1 as acceptable, among others. So if your application uses only approved implementations of AES and SHA-1 algorithms, then that application should be acceptable according to FIPS 140-2.  If your application uses an MD5 hashing algorithm during communication, that product is NOT acceptable for use in an environment where FIPS compliance must be maintained. 

However, there is also this contradictory quote from NIST:

“The U.S. National Institute of Standards and Technology says, "Federal agencies should stop using SHA-1 for...applications that require collision resistance as soon as practical, and must use the SHA-2 family of hash functions for these applications after 2010" [22]”

So it seems to me that there are contradictory government statements regarding the usage of security functions.  The most recent draft of FIPS 140-2 Annex A clearly lists SHA-1 as an acceptable hashing algorithm, yet, the quote from NIST says that government agencies must use only SHA-2 after 2010.  Not sure what the answer is to that. 

These algorithms can be broken up into two categories: encryption algorithms and hashing algorithms.  An example of a FIPS encryption algorithm is AES (which consists of three members of the Rijndael family of ciphers, adopted in 2001, and has a much cooler name.)  Encryption algorithms can be reversed/decrypted, that is, converted back into their original form from before they were encrypted.

Hashing algorithms on the other hand, are also known as one-way functions.  They are mathematically one-way and cannot be reversed.  Once you hash something, you cannot “un-hash” it, no matter how much computing power you have.  Hashing algorithms take any amount of data, of an arbitrary size, and mathematically map it to a “hash” of fixed length.  For instance, the SHA-256 algorithm will map any chunk of data, whether it be 10 bytes or 2 gigabytes, into a 256 bit hash.  Always 256 bit output, no matter the size of the input.
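A quick PowerShell illustration of that fixed-length property (the file path is just an example of a larger input):

$sha256 = [System.Security.Cryptography.SHA256]::Create()
# A 2-byte input and a multi-megabyte input both produce a 32-byte (256-bit) hash:
$sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes('hi')).Length                        # 32
$sha256.ComputeHash([System.IO.File]::ReadAllBytes('C:\Windows\System32\notepad.exe')).Length  # 32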

This is why the hash of a password is generally considered decently secure, because there is NO way to reverse the hash, so you can pass that hash to someone else via insecure means (e.g. over a network connection,) and if the other person knows what your password should be, then they can know that the hash you gave them proves that you know the actual password.  That's a bit of a simplification, but it gets the point across.

If you were trying to attack a hash, all you can do, if you know what hash algorithm was used, is to keep feeding that same hash algorithm new inputs, maybe millions or billions of new inputs a second, and hope that maybe you can reproduce the same hash.  If you can reproduce the same hash, then you know your input was the same as the original ‘plaintext’ that you were trying to figure out.  Maybe it was somebody’s password.  This is the essence of a ‘brute-force’ attack against a password hash.

Logically, if all inputs regardless of size, are mapped to a fixed size, then it stands to reason that there must be multiple sets of data that, when hashed, result in the same hash.  These are known as hash collisions.  They are very rare, but they are very bad, and collisions are the reason we needed to migrate away from the MD5 hashing algorithm, and we will eventually need to migrate away from the SHA-1 hashing algorithm.  (No collisions have been found in SHA-1 yet that I know of.)  Imagine if I could create a fake SSL certificate that, when I creatively flipped a few bits here and there, resulted in the same hash as a popular globally trusted certificate!  That would be very bad.

Also worth noting is that SHA-2 is an umbrella term that includes SHA-256, SHA-384, SHA-512, etc.

FIPS 140 is only concerned with algorithms used for external communication.  Any communication outside of the application or module, whether that be network communication, or communication to another application on the same system, etc.  FIPS 140 is not concerned with algorithms used to handle data within the application itself, within its own private memory, that never leaves the application and cannot be accessed by unauthorized users.  Here is an excerpt from the 140-2 standard to back up my claim:

“Cryptographic keys stored within a cryptographic module shall be stored either in plaintext form or encrypted form. Plaintext secret and private keys shall not be accessible from outside the cryptographic module to unauthorized operators…”

Let’s use Active Directory as an example.  This is why, when someone gets concerned about what algorithms AD uses internally, you should refer them to the above paragraph and tell them not to worry about it.  Even if it were plaintext (it’s not, but even if hypothetically it were,) it isn’t in scope for FIPS because it is internal only to the application.  When Active Directory and its domain members are operated in FIPS mode, connections made via Schannel.dll, Remote Desktop, etc., will only use FIPS compliant algorithms. If you had applications before that make calls to non-FIPS crypto libraries, those applications will now crash.

Another loophole that has appeared to satisfy FIPS requirements in the past, is wrapping a weaker algorithm inside of a stronger one.  For instance, a classic implementation of the RADIUS protocol utilizes the MD5 hashing algorithm during network communications.  MD5 is a big no-no.  However, see this excerpt from Cisco:

“RADIUS keywrap support is an extension of the RADIUS protocol. It provides a FIPS-certifiable means for the Cisco Access Control Server (ACS) to authenticate RADIUS messages and distribute session keys.”

So by simply wrapping weaker RADIUS keys inside of AES, it becomes FIPS-certifiable once again.  It would seem to follow that this logic also applies when using TLS and IPsec, as they are able to use very strong algorithms (such as SHA-2) that most applications do not natively support.

So with all that said, if you need the highest levels of network security, you need 802.1x and IPsec to protect all those applications that can't protect themselves.

Bare Minimum Required to Promote a Domain Controller Into a Domain

by Ryan 13. October 2013 13:17

Hiya,

This is something I meant to blog about months ago, but for some reason I let it slip my mind. It just came up again in a conversation I had yesterday, and I couldn't believe I forgot to post it here. (It also may or may not be similar to a test question that someone might encounter if he or she were taking some Microsoft-centric certification tests.)

It started when someone on ServerFault asked the question, "Do you need a GC online to DCPROMO?"

Well the short answer to that question is that no, you don't need a global catalog online (or reachable) from the computer you are trying to simply promote into a domain controller. But that got me thinking: I'd like to go a step farther and see for myself what the bare minimum requirements are for promoting a computer to a domain controller in an existing domain, especially concerning the accessibility of certain FSMO roles from the new DC. I don't care about anything else right now (such as how useful this DC might be after it's promoted) except for just successfully completing the DCPromo process.

On one hand, this might seem like just a silly theoretical exercise, but on the other hand, you just might want to have this knowledge if you ever work in a large enterprise environment where your network is not fully routed, and all DCs are not fully meshed. You might need to create a domain controller in a segment of the network where it has network connectivity to some other DCs, but not all of them.

Well I have a fine lab handy, so let's get this show on the road.

  1. Create three computers.
  2. Make two of them DCs for the same single-domain forest (of the 2008+ variety.)
  3. Make only one of them a global catalog.
  4. Leave all FSMOs on the first domain controller, for now.

So when you promote a writable domain controller, you need two things: another writable domain controller online from which to replicate the directory, and your first RID pool allocation directly from the RID pool FSMO role holder. When you promote an RODC, you don't even need the RIDs, since RODCs don't create objects or outbound replicate.  If the computer cannot reach the RID pool master, as in direct RPC connectivity, DCPROMO will give you this message:

You will not be able to install a writable replica domain controller at this time because the RID master DC1.domain.com is offline.

But you can still create an RODC, as long as the domain controller with whom you can communicate is not also an RODC - it has to be a RWDC.

So the final steps to prove this theory are:

  1. Transfer only the RID master to the second domain controller.
  2. Power down the first domain controller.
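If you want to script step 1 instead of using the GUI or ntdsutil, a quick sketch with the Active Directory PowerShell module might look like this (DC2 being a placeholder for your second domain controller's name):

# Move only the RID master FSMO role to the second DC:
Move-ADDirectoryServerOperationMasterRole -Identity 'DC2' -OperationMasterRole RIDMaster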

At this point, only the RID pool master is online, and no global catalog is online. Now run DCPromo on your third computer. Can you successfully promote the new domain controller as a RWDC?

Yes you can.

Now, you'll encounter some other problems down the road, such as the new DC not being able to process password changes because it cannot contact the PDCe, but you've successfully added a new domain controller to the domain nonetheless.

DNS over HTTP

by Ryan 31. March 2013 12:39

I was discussing with some fellow IT admins, the topic of blocking certain websites so that employees or students couldn't access them from the work or school network.  This is a pretty common topic for IT in most workplaces.  However, I personally don't want to be involved in it.  I realize that at some places, like schools for instance, filtering of some websites may be a legal or policy requirement.  But at the workplace, if an employee wants to waste company time on espn.com, that is an issue for HR and management to take up with that employee.  And again in my opinion, it's not about how much time an employee spends on ESPN or Reddit either, but simply whether that employee delivers satisfactory results.  I don't want to handle a people problem with a technical solution.  I don't want to be the IT guy that derives secret pleasure from blocking everyone from looking up their fantasy football scores.  (Or whatever it is people do on espn.com.)  I could spend my entire career until I retire working on a web proxy, blocking each and every new porn site that pops up.  If there's one thing the internet has taught me, it's that there will always be an infinite number of new porn sites.

At the other extreme from blacklisting, someone then suggested whitelisting.  Specifically, implementing "DNS white listing" in their environment for the purpose of restricting users to only a handful of approved internet sites.  Well that is a terrible idea.  The only proper way of doing this in my opinion is to use a real web proxy, such as ISA or TMG or Squid.  But I could not help but imagine how I might implement such a system, and then how I might go about circumventing it from the perspective of a user.

OK, well for my first half-baked idea, I can imagine standing up a DNS server, disabling recursion/forwarders on that DNS server, and putting my "white list" of records on that DNS server.  Then, by way of firewall, block all port 53 access to any other IP except my special DNS server.  Congratulations, you just made your users miserable, and have done almost nothing to actually improve the security of your network or prevent people from accessing other sites.  Now the users just have to find another way of acquiring IP addresses for sites that aren't on your white list.

Well how do I get name resolution back if I can't use my DNS server?  I have an idea... DNS over HTTP!

The guys at StatDNS have already thought about this.  And what's awesome is that they've created a web API for resolving names to IPs over HTTP.  Here's what I did in 5 minutes of Powershell:

PS C:\> Function Get-ARecordOverHTTP([string]$Query) { $($($(Invoke-WebRequest http://api.statdns.com/$Query/a).Content | ConvertFrom-Json).Answer).rdata }

PS C:\> Get-ARecordOverHTTP google.com
173.194.70.101
173.194.70.100
173.194.70.138
173.194.70.102
173.194.70.139
173.194.70.113

PS C:\> Get-ARecordOverHTTP myotherpcisacloud.com
168.61.52.184

Simple as that. How cool is Powershell, seriously?  One line to create a function that accepts a name and returns a list of IPs by interacting with an internet web service.  Pretty awesome if you ask me.

As long as you have port 80 open to StatDNS, you have internet name resolution.  Now, to wrap this into a .NET-based Windows service...

Why Are You Talking To THAT Domain Controller!?

by Ryan 9. February 2013 11:05

I was in Salt Lake City most of this week. Being surrounded by stark snow-covered mountains made for some wonderful scenery... it could not be more different than it is here in Texas. Plus I got to meet and greet with a bunch of Novell and NetIQ people. And eat an enormous bone-in ribeye that no human being has any business eating in one sitting.

But anyway, here's a little AD mystery I ran in to a couple weeks ago, and it may not be as simple as you first think.

As Active Directory admins, something we're probably all familiar with is member servers authenticating with the "wrong" domain controller. By wrong, I mean a DC that is in a different site than the member server, when there's a perfectly fine DC right there in the same site as the member server, and so the member server is incurring cross-site communication when it doesn't need to be. Everything might still function well enough as long as the communication between DC and member server is successful, but now you're saturating your slower inter-site WAN links with AD traffic when you don't need to be. You should want your AD replication, group policy application, DFS referrals, etc., to run like a well-oiled machine.

I often work in a huge environment with AD sites in many countries and on multiple continents, and thousands of little /26 subnets that can't always be easily grouped into a predictable supernet for the purposes of linking subnets to sites in AD Sites & Services. So I'm always alert to the fact that if I log on to a server, and I notice that logon takes an abnormally long time, I very well could be logging on to the wrong DC. First, I run set log to see which DC I have logged on to:

*DC01 is in Amsterdam*

So in this case, I noticed that while I had logged on to a member server in Dallas, that server's logon server was a DC in Europe. :(

You immediately think "The server's IP subnet isn't defined in AD Sites & Services or is associated to the wrong site," don't you?  Yeah, me too. So I went and checked. Lo and behold, the server's IP subnet was properly defined and associated to the correct site in AD.

Now we have a puzzle. Back on the member server, I run nltest /dsgetsite to verify that the domain member does know to which site it belongs. (Which the domain member's NetLogon service stores in the registry in the DynamicSiteName value once it's discovered.)
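(If you'd rather peek at that registry value directly instead of using nltest, something like this works:)

# The site name that the NetLogon service cached after DC discovery:
(Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters').DynamicSiteName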

I also ran nltest /dsgetdc:domain.com /Account:server01$ to essentially emulate the DC locator and selection process for that server, which basically just confirmed what we already knew:

C:\Users\Administrator>nltest /dsgetdc:domain.com /Account:server01$ 
           DC: \\DC01.DOMAIN.COM (In Amsterdam) 
      Address: \\10.0.2.55 
     Dom Guid: blah-blah-blah 
     Dom Name: DOMAIN.COM 
  Forest Name: DOMAIN.COM 
 Dc Site Name: Amsterdam 
Our Site Name: Arlington 
        Flags: GC DS LDAP KDC TIMESERV WRITABLE DNS_DC DNS_DOMAIN DNS_FOREST FULL_SECRET WS 
The command completed successfully

So where do we look next if there's no problem with the IP subnets in AD Sites & Services?  I'm going with DNS. We know that domain controllers register site-specific SRV records so that clients who know to which site they belong will know what DNS query to make to find domain controllers specific to their own site.  So what DNS records did we find for the Arlington site?

Forward Lookup Zones
    _msdcs.domain.com
        dc
            _sites
                Arlington
                    _kerberos SRV NewYorkDC
                    _kerberos SRV SanDiegoDC
                    _kerberos SRV MadridDC
                    _kerberos SRV ArlingtonDC
                    _ldap     SRV NewYorkDC
                    _ldap     SRV SanDiegoDC
                    _ldap     SRV MadridDC
                    _ldap     SRV ArlingtonDC

OK, now things are getting weird.  All of these other domain controllers that are not part of the Arlington site have registered their SRV records in the Arlington site.  The only way I can imagine that happening is because of Automatic Site Coverage, whereby domain controllers will register their own SRV records into sites where it is detected that the site has no domain controllers of its own... combined with the fact that scavenging is turned off for the DNS server, including the _msdcs zone.  So someone, once upon a time, must have created the Arlington site in AD before the actual domain controllers for Arlington were ready.  What's more is that Automatic Site Coverage is supposed to intelligently use site link costing so that only the domain controllers in the next closest site provide "coverage" for the site with no DCs, not every DC in the domain. Turns out the domain did not have a site link strategy either - it used DEFAULTIPSITELINK for everything - the entire global infrastructure. So even after Arlington did get some domain controllers, the SRV records from all the other DCs stayed there because of no scavenging.

Here's the thing though - did you notice that almost every other domain controller in the domain had SRV records registered in the Arlington site, except for the domain controller in Amsterdam that our member server actually authenticated to!?

This is getting kinda' nuts.  So what else, besides the DNS query, does a member server perform in order to locate a suitable domain controller?

So after a client does a DNS query for _ldap._tcp.SITENAME._sites.dc._msdcs.domain.com and gets a response, the client then begins to do LDAP queries against the DCs given in the DNS response to make sure that the DCs are alive and servicing requests. If you want to see this for yourself, I recommend starting Wireshark, and then restarting the NetLogon service while the capture is running. If it turns out that none of the DCs in the list that was returned by the site-specific DNS query is responding to your LDAP queries, then the client has to back up and try again with a domain-wide query.
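You can approximate those two steps yourself from the member server. Here's a rough sketch; the site and domain names are placeholders, Resolve-DnsName needs Windows 8/2012-era PowerShell, and a bare TCP connect to port 389 is only a crude stand-in for the real LDAP ping:

# Step 1: the site-specific SRV lookup
$srv = Resolve-DnsName -Type SRV '_ldap._tcp.Arlington._sites.dc._msdcs.domain.com'

# Step 2: see whether each candidate DC even answers on the LDAP port
ForEach ($record In ($srv | Where-Object { $_.Type -eq 'SRV' }))
{
    $socket = New-Object Net.Sockets.TcpClient
    Try     { $socket.Connect($record.NameTarget, 389); "$($record.NameTarget) : reachable on 389" }
    Catch   { "$($record.NameTarget) : NOT reachable on 389" }
    Finally { $socket.Dispose() }
}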

And that is what was happening. The client, server01, was getting a list of DCs for its site, even ones that were erroneously there, but I confirmed that it was unable to contact any of those domain controllers over port 389. So after that failed, the server was forced to try again with a domain-wide query, where it finally found one domain controller that it could perform an LDAP query on... a domain controller in Amsterdam.

Moral of the story: Always blame the network guys.

 

Sometimes I Can Access the WebDAV Share, Sometimes I Can't!

by Ryan 13. November 2012 10:19

You probably already know that all of the Sysinternals tools, such as Process Monitor, Process Explorer, Autoruns, and much more, can be accessed via "shared folder" from any computer connected to the internet by navigating to \\live.sysinternals.com\.  This isn't the same kind of share you'd create if you just shared a folder on your PC.  It's a WebDAV share, and is accessed over HTTP.

Sometimes though, I feel the need to access this share from the command line, either in the Cmd shell or Powershell.  Alas, here's what I see:

*Path not found.*

I get the same result with Powershell. Bummer. Well I know I can access the path with Explorer when I type that same UNC into the address bar, or if I just type the UNC into the Run dialog box, so this must just be a limitation of those command-line tools, right?

*Works fine in Explorer*

Oh well... but wait. Now having successfully accessed the network path with Explorer, let me now immediately go back to the Cmd shell and try it again:

 

*Now it works in Cmd too!*

OK, now accessing the network path works fine from the Cmd shell and from Powershell, even though all I did was access it through Explorer first, and then try again. Now I just have to know what the heck is going on... and to do that, I need to use Process Monitor. Which, amusingly, is in the WebDAV share I'm trying to access. But I'll run a local copy.

I started the trace. Here's my first attempt to access the network path with Cmd.exe, which failed:

*Network path not found*

This was the very first time in the Process Monitor trace when the string "live.sysinternals.com" appeared in the Path field. It's also the first time the Cmd.exe process shows up in the trace. It's currently filtered to only include events where the Path field contains the string live.sysinternals.com. The really interesting part about this is that it appears that the moment I pressed Enter on the command line, Explorer.exe was the first process to be involved, not the process I was interacting with! That's odd. Maybe a file system filter driver intercepted the call and notified Explorer? It looks like Explorer is looking for something related to named pipes and the Workstation Service (wkssvc) on the remote server, but it doesn't find it.  Then Cmd.exe first checked my local file system for a file in the Windows\CSC\ directory, which it didn't find, and then it tried to access the network path that I actually asked for, which resulted in "Bad network path." Then it apparently tried again with the same local file system path, and then again with the network directory instead of the specific executable name.  All failed. "Network path not found," my command prompt tells me. But with no further input from me, Explorer takes off doing its own thing, calling cscapi.dll and loading things in the background and sending things over the network. All I did was hit enter in the Command Prompt above.

So what is this CSC directory? Googling the term led me to an old post on Raymond Chen's blog. Client Side Caching. OK, so apparently both processes are looking for a cached or offline version of the network path.

Then I move over to the Explorer.exe window and type the path into the address bar. Explorer looks for some more CSC stuff first, and then svchost.exe starts communicating with the remote server over TCP. There's a lot of loading of WebDAVRedirector stuff. Finally, after a lot of work, I start seeing events like these from Explorer:

*Explorer starts finding it, finally*

Notice that Explorer also seems to be storing the autoruns executable in a temporary "Tfs_DAV" directory on my workstation.

Finally, after having success with Explorer, I go right back to the Command Prompt and try it again. This time, the trace looks like this:

*Works in cmd.exe now too*

Now I see svchost.exe stepping in with a WebDavRedirector, and cmd.exe getting some successful returns from its IRPs. Finally, after playing around in that Tfs_DAV directory and some more intermingling of svchost.exe and the System process both helping out, the process autoruns.exe finally launches.

So that's a pretty fast and loose overview of what is actually going on. The entire trace was a beast to wade through, and there is obviously a lot of orchestration and cooperation between many different Windows components required to allow you to access a WebDAV share from within Cmd.exe, and I don't fully understand all of it... but the bottom line is that, at least on my Windows 7 SP1 x64 workstation, it looks like Explorer.exe is smart enough to read from a WebDAV share and cache the data locally, whereas Cmd.exe is only smart enough to read the data locally, if and only if it's already cached locally... or perhaps the redirector had to be "woken up" by Explorer first, before Cmd.exe was able to use it.
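Based on that hunch, and purely as an assumption on my part rather than something I dug out of the trace, you can probably skip the Explorer step entirely by making sure the WebDAV redirector service is running before you try the UNC path from the command line:

# Make sure the WebClient (WebDAV Mini-Redirector) service is running, then try the UNC path again:
Start-Service -Name WebClient
Get-ChildItem \\live.sysinternals.com\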

Finally, I'll leave off with a bit about the WebDAV Mini-Redirector from Wikipedia:

"In Windows XP, Microsoft added the Web Client service, also known as the WebDAV mini-redirector,[11] which is preferred by default over the old Web folders client. This newer client works as a system service at the network-redirector level (immediately above the file-system), allowing WebDAV shares to be assigned to a drive letter and used by any software. The redirector also allows WebDAV shares to be addressed via UNC paths (e.g. http://host/path/ is converted to \\host\path\) for compatibility with Windows filesystem APIs."

Best-Practices Remediation Tips for Server 2012 Pt I.

by Ryan 11. November 2012 15:19

I'm calling this Part 1 because I realized as I started writing that this is a lot of work, and can easily be split into 2 or more articles.

Like most of the IT Pro community, I've been getting comfortable with Server 2012 the past several weeks now, and the journey is still ongoing. As I talked about last time, I do like those Best Practices Analyzers for Windows Server. Here's me running it in Server Manager:

*BPA in Server Manager 2012*

Getting any of these results back means that I have some work to do in remediating them. It's not uncommon for a Server 2012 system that was just built fresh with no applications loaded or configuration changes to still have one or two compliance issues in the Best Practices Analyzers. There is a balance to be maintained between compatibility and performance optimization. Also, many of these issues that popped up for me personally were not role-specific, but rather apply to a base component of the OS. Now I'll go over some of the interesting ones I've gotten and how I fixed them:

 

[Hyper-V]: Avoid storing Smart Paging files on a system disk.

Smart paging is new with Hyper-V 2012. Read about it here. Basically, you now enter a minimum amount of RAM, a maximum amount of RAM, and a startup amount of RAM for each virtual machine. The Windows OS can boot up more comfortably with a larger amount of RAM, but once it reaches cruising altitude and is idle, the RAM requirements go back down, which allows the Hyper-V host to gradually start reclaiming memory from the VM. With all this dynamic shrinking and growing of the memory on all your VMs, that's where the "smart paging file" comes in. And just like you can improve performance by putting your traditional Windows paging file on its own disk, the same goes for the Smart Paging file.

[Hyper-V]: Use RAM that provides error correction.

Microsoft doesn't support Hyper-V environments on hardware that isn't using ECC RAM. This is just a lab using desktop-grade hardware, so there's nothing I can really do about this. If you're using real server gear, this should not be an issue for you.

[Hyper-V]: Virtual machines should be backed up at least once every week.

This one I still don't understand. I have all the guest operating systems backing up nightly, and then I am backing them up again through the backup of the Hyper-V host. So go away, error.

[Windows]: Short file name creation should be disabled.

Then why is it enabled by default? Oh, I know why... it's because of your crappy line-of-business apps that were written back in 1998 that you can't get rid of, right? Well I'm in a pure 2012 environment right now, so I have no such worries. Good bye 8.3 filenames. Change this registry value to 1 to disable short file name creation on all volumes: HKLM\System\CurrentControlSet\Control\FileSystem\NtfsDisable8dot3NameCreation.
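Here are two equivalent ways to flip that switch (note that this only stops new short names from being generated; names that already exist stick around until you strip them):

# Via the registry:
Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem' -Name NtfsDisable8dot3NameCreation -Value 1
# Or via fsutil:
fsutil behavior set disable8dot3 1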

[Windows]: IrpStackSize should have the recommended value.

Irp stands for I/O Request Packet. Mark Russinovich did a great job of explaining IRPs and their role in the Windows I/O system in his book Windows Internals. All the various components or layers that an I/O packet traverses on its way to and from a disk, for example, are collectively referred to as "a stack." Each filter driver you add to the file system means that the IrpStackSize needs to be increased in order to accommodate it. A common example of this is when you install an antivirus product that uses a file system filter driver. If your IrpStackSize is set too small, certain operations might fail, such as attempting to access that machine's file system remotely. Conversely, it doesn't need to be set too high, either.  It was at 11 by default on my 2012 systems. The Best Practices Analyzer says it should be at 15, so I'll set it to 15. HKLM\System\CurrentControlSet\Services\LanmanServer\Parameters\IRPStackSize.
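Setting it from PowerShell looks like this (the value is read by the Server service, so restart LanmanServer or reboot for it to take effect):

Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters' -Name IRPStackSize -Value 15 -Type DWord
Restart-Service LanmanServer -Force   # -Force also restarts any dependent services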

 

Alright, that's where I'll leave off for part 1. Part 2 will focus on some more network-specific optimizations, so check back soon!

Native NIC Teaming In Server 2012

by Ryan 4. November 2012 14:18

Built-in NIC teaming was one of my personally most anticipated features of Server 2012.  NIC teaming, whether for redundancy or for more bandwidth, has always been a cool concept and one of the foundations of highly-available systems, but it has historically required 3rd party vendor software to enable.  Probably the most popular example I can think of is the HP Network Configuration Utility software:

*The HP Network Configuration Utility*

Almost every IT pro is going to be familiar with that screen.  Up until now, to team network adapters, one had to use vendor software such as the HP software pictured above. But starting with Windows Server 2012, the ability is built right in to the operating system, bringing the feature to new sets of hardware and without the need for any 3rd party vendor drivers or software! (Also of note is that Microsoft supports their NIC teaming, whereas they do not support the HP Network Configuration Utility.)

You can use the graphical Server Manager to configure NIC teams, but you can also do it all right from within Powershell. And since I typically prefer to keep my servers in straight-up Server Core mode, I wanted to figure out how to do it all from Powershell. My test machine for this experiment is a SuperMicro SYS-5015A-H 1U. It has two embedded GbE adapters (Realtek based.) Before Server 2012, I always just kept one of the NICs disabled since I had no use for it, and no teaming software. But now, I've installed a fresh copy of Windows Server 2012 Standard edition on it. 

*Get-NetAdapter*

To make a team out of these two network adapters, simply do

New-NetLbfoTeam -Name Team -TeamMembers Ethernet,"Ethernet 2"

That's it! (Just put quotes around 'Ethernet 2' because it contains a space.) Now keep in mind that you'll probably have to re-do the IP configuration for your new NIC team now, so you'll want physical or DRAC/ILO access to the machine so you can do that. (Or do it via script. I set the IP configuration on my new NIC team via sconfig.) Here is what the new team looks like in Powershell: 

*Get-NetLbfoTeam*

The TeamingMode and LoadBalancingAlgorithm default to SwitchIndependent and TransportPorts, respectively, but of course can be configured to whatever you want as you create the team with the New-NetLbfoTeam command. Check this Technet article for explanations on the different options and what they do. If you later want to add another NIC to the existing team, you can use the Add-NetLbfoTeamMember command and specify the NIC you want to add.
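For example (the NIC name here is a placeholder for whatever Get-NetAdapter reports on your box, and LACP obviously requires switch-side support):

# Add a third NIC to the existing team:
Add-NetLbfoTeamMember -Name 'Ethernet 3' -Team 'Team'
# Change the teaming mode after the fact:
Set-NetLbfoTeam -Name 'Team' -TeamingMode Lacp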

 

*Get-NetLbfoTeamMember*

Beautiful.

Setting Up an IPv6 Tunnel Through Hurricane Electric

by Ryan 1. March 2012 10:17

It's 2012, and ISPs are still slow to adopt IPv6. It seems like very few of us can say that we have globally-accessible IPv6 addresses. Myotherpcisacloud.com doesn't even have an IPv6 address yet... and that makes me a very sad panda. But there is something I can do about it right now, without the help of my ISP.

If you have a Cisco router, I can show you how to create an IPv6 tunnel that will have you dual-stacked and on the IPv6 Internet in no time! This article assumes that you cannot use native IPv6 out to the Internet, and that you already have the router properly set up and in use in an IPv4 network.

The router I will use in this example is a 2621XM; I bought it for $150 on eBay. It has two FastEthernet ports. It was manufactured in 1999. So any model at least as recent as that should be able to handle this just fine. I do IPv4 NAT between the two FE ports so that the rest of my home network served by my AT&T U-Verse Residential Gateway stays separate from my lab network, but the lab still has to go through the U-Verse gateway to reach the Internet. (U-Verse claims that they'll push an IPv6 firmware upgrade out automatically to all their customers sometime in 2012, but I'll believe it when I see it.)

*There's still some juice left in this crusty old thing*

For this to work for me, I needed to configure my U-Verse Gateway to put my Cisco router in "DMZ+" mode, and allow the outside interface of my Cisco router to receive a DHCP address. This allows my U-Verse gateway to assign my router the same public IPv4 address as itself, and forward all unspecified traffic to it.

We’re going to utilize the free service at Hurricane Electric for this. Follow that link and sign up. It’s their "Tunnel Broker" service that you’re after. After a short quiz, they will give you your very own IPv6 tunnel and your very own IPv6 address space! For free!

All you need to do now is configure your router. If you've never used Cisco IOS, these commands might look weird to you. They're shorthand for things like "enable" - enter "enable" mode which allows us privileged access so that we can make configuration changes to the router. "conf t" is shorthand for "configure terminal" - meaning "I wish to make configuration changes to this router from the terminal."

Router>en
Router#conf t
Router(config)#ipv6 unicast-routing
Router(config)#exit
Router#copy run start

At this point you have enabled ipv6 routing globally on your router. "Copy run start" is shorthand for "copy the running configuration to the startup configuration, effectively making these changes permanent."

Next, create a tunnel on your router like this:

Router>en
Router#conf t
interface Tunnel0
description Hurricane Electric IPv6 Tunnel Broker
no ip address
ipv6 enable
ipv6 address 2001:470:1f0e:5a4::2/64 (Use your side of the endpoint that Hurricane electric gave you!)
tunnel source 75.32.98.76 (Your public IPv4 address)
tunnel destination 216.218.224.42 (Hurricane Electric’s IPv4 endpoint for this tunnel)
tunnel mode ipv6ip
ipv6 route ::/0 Tunnel0
end
write

And you’re pretty much done! Configure your clients with an IPv6 address in that space, and you now have IPv6 connectivity all the way to the Internet. Google has a public DNS server at 2001:4860:4860::8888. Test out your tunnel by trying to ping that address. Remember that IPv6 and IPv4 are quite different. There is no NAT in IPv6. (Let's not talk about NAT64 yet.) Internet communication is the way it was truly meant to be – end to end. That also means the need to protect yourself with firewalls will become more important than ever, since you can’t hide behind a NAT anymore!
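A quick sanity check from a Windows client that has picked up an address out of your new /64 might look like this:

ping -6 2001:4860:4860::8888
ping -6 ipv6.google.com
ipconfig | findstr /i "IPv6"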

Now you can surf the web with a “dual-stack,” meaning that you’re running both IPv4 and IPv6: your IPv4 packets will take their normal route, while your IPv6 packets are diverted through your new tunnel. Seamlessly. Pretty neat, huh? Try to ping ipv6.google.com and see what happens!

I guess that’ll have to do until ISPs catch up with IPv6 technology.

Tags:

Networking
