To Scriptblock or Not to Scriptblock, That is the Question

by Ryan 27. March 2014 17:03

I was doing some work with the Active Directory Powershell cmdlets recently.  Well, I work with them almost every day, but they still get me with their idiosyncrasies from time to time.

I needed to check some group memberships on various privileged groups within the directory.  I'll show you an abridged version of the code I started with to get the point across, the idea of which is that I iterate through a collection of groups (a string array) and perform some actions on each of the Active Directory groups in sequence:

Foreach ($ADGroupName In [String[]]'Domain Admins',     `
                                   'Enterprise Admins', `
                                   'Administrators',    `
                                   'Account Operators', `
                                   'Backup Operators')
{
    $Group = Get-ADGroup -Filter { Name -EQ $ADGroupName } -Properties Members
    If ($Group -EQ $Null -OR $Group.PropertyNames -NotContains 'Members')
    {
        # This condition only occurs on the first iteration of the Foreach loop!
        Write-Error "$ADGroupName was null or was missing the Members property!"
    }
    Else
    {
        Write-Host "$ADGroupName contains $($Group.Members.Count) members."
    }
}

Before I continue, I'd just like to mention that I typically do not mind very long lines of code nor do I avoid verbose variable names, mostly because I'm always using a widescreen monitor these days.  Long gone are the days of the 80-character-wide terminal.  And I agree with Don Jones that backticks (the Powershell escape character) are to be avoided on aesthetic grounds, but for sharing code in formats that are less conducive to long lines of code, such as this blog or a StackExchange site with their skinny content columns, I'll throw some backticks in to keep the horizontal scrolling to a minimum. 

Anyway, the above code exhibited the strangest bug. At least I'd call it a bug. (I'll make sure and let Jeffrey Snover know next time I see him. ;P) Only on the first iteration of the Foreach loop, I would get the "error" condition instead of the expected "Domain Admins contains 5 members" output.  The remaining iterations all behaved as expected.  It did not matter in what order I rearranged the list of AD groups; I always got an error on the first element in the array.

For a moment, I settled on working around the "bug" by making a "Dummy Group," including that as the first item in the array, gracefully handling the expected exception because Dummy Group did not exist, and then continuing normally with the rest of the legitimate groups.  This worked fine, but it didn't sit well with me.  Not exactly my idea of production-quality code.  I wanted to find the root cause of this strange behavior.

Stackoverflow has made me lazy.  Apparently I go to Serverfault when I want to answer questions, and Stackoverflow when I want other people to answer questions for me.  Simply changing the Get-ADGroup line above to this:

$Group = Get-ADGroup -Filter "Name -EQ '$ADGroupName'" -Properties Members

Made all the difference.  That is, using a string with an expandable variable inside it instead of a script block for the filter.  (Which is itself a little confusing, since single quotes (') usually indicate non-expandable strings.  Oh well.  Just syntax to remember when playing with these cmdlets.)

Nevertheless, if the code would not work correctly with a script block, I wish the parser would flag it as a syntax error instead of acting weird.  (The behavior exists in PS 2 and PS 4, though in PS 4 the missing property is just ignored and I get 0 members, which is even worse.)

Powershell, Panchromatic Edition, Continued!

by Ryan 7. March 2014 18:03

That is a weird title.  Anyway, this post is a continuation of my last post, here, in which I used Powershell to create a bitmap that contained each and every RGB color (24-bit, ~16.7 million colors) exactly once.  We learned that using dynamic arrays and the += operator is often not a good choice when working with large amounts of data that you'd like to see processed before your grandchildren graduate high school. Today we look at another performance pitfall.

So last time, I printed a 4096x4096 bitmap containing 16777216 colors. But the pixels were printed out in a very uniform, boring manner.  I wanted to at least see if I could randomize the pixels a little bit to make the image more interesting.  First, I attempted to do that like this:

Function Shuffle([System.Collections.ObjectModel.Collection[System.Drawing.Color]]$List)
{
    $NewList = New-Object 'System.Collections.ObjectModel.Collection[System.Drawing.Color]'
    While ($List.Count -GT 0)
    {
        [Int]$RandomIndex = Get-Random -Minimum 0 -Maximum $List.Count
        $NewList.Add($List[$RandomIndex])
        $List.RemoveAt($RandomIndex)
        Write-Progress -Activity "Randomizing Pixels..." -Status "$($NewList.Count) of 16777216"
    }
    Return $NewList
}

Seems pretty solid, right?  I intend to shuffle or randomize the neatly ordered list of pixels that I've generated.  So I pass that neatly ordered list to a Shuffle function.  The Shuffle function randomly plucks an element out of the original list one at a time, inserts it into a new "shuffled" list, then removes the original element from the old list so that it is not reused. Finally, it returns the new shuffled list.

Yeah... so that runs at about 12 pixels per second.
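The culprit is the removal: every RemoveAt has to shift all the elements behind the removed index, so the whole shuffle ends up quadratic. For comparison, here is the classic swap-based, in-place Fisher-Yates shuffle sketched in Python (just an illustration of the algorithm, not what I ultimately used):

```python
import random

def fisher_yates_shuffle(items):
    """In-place shuffle: one swap per element, O(n) total, no removals."""
    # Walk backwards; swap each slot with a randomly chosen slot from the
    # not-yet-finalized prefix (including itself).
    for i in range(len(items) - 1, 0, -1):
        j = random.randint(0, i)
        items[i], items[j] = items[j], items[i]
    return items

pixels = list(range(10))
shuffled = fisher_yates_shuffle(pixels[:])
print(sorted(shuffled) == pixels)  # True: same elements, new order
```

Because it swaps within one list instead of moving elements between two, it never pays the cost of shifting elements around.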

So instead of waiting 16 days for that to complete (16.7 million elements at 12 per second...), I decided that I had to come up with a better solution.  I thought on it, and I almost resorted to writing a pure C# type and adding it to my script using Add-Type, but then I decided that would be "cheating," since I wanted to write this in Powershell as best I could.

Then it suddenly hit me: maybe I was thinking about it this way too hard.  Let's try something crazy:

Write-Progress -Activity "Randomizing Pixels" -Status "Please wait..."
$RandomPixelList = $AllRGBCombinations | Get-Random -Count $AllRGBCombinations.Count

Done in about two minutes, which beats the hell out of 16 days.  What we have now is a "randomized" list of pixels. Let's paint them and see how it looks:

A slice at 1x magnification:

A slice at 6x magnification:

I call it the "Cosmic Microwave Background."

You'll likely see a third installment in this series as I work some interesting algorithm into the mix so that the image is more interesting than just a random spray of pixels.  Until then...

Powershell Dynamic Arrays Will Murder You, Also... A Pretty Picture! (Part 1 of X)

by Ryan 5. March 2014 20:31

I was browsing the web a couple days ago and I saw this post, which I thought was a fun idea.  I didn't want to look at his code though, because, kind of like reading movie spoilers, I wanted to see if I could do something similar myself first before spoiling the fun of figuring it out on my own. The idea is that you create an image that contains every RGB color, using each color only once.

Plus I decided that I wanted to do it in Powershell, because I'm a masochist.

There are 256 x 256 x 256 RGB colors (in a 24-bit spectrum.) That equals 16777216 colors. Since each color will be used only once, if I only use 1 pixel per color, a 4096 x 4096 image would be capable of containing exactly 16777216 different colors.
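A quick sanity check on that arithmetic, sketched in Python:

```python
colors = 256 ** 3             # 8 bits each for red, green, and blue
print(colors)                 # 16777216

side = 4096                   # candidate square image dimension
print(side * side == colors)  # True: exactly one pixel per color fits
```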

First I thought to just generate a random color, draw the pixel in that color, then add it to a list of "already used" colors, and then on the next iteration, just keep generating random colors in a loop until I happen upon one that wasn't in my list of already used colors. But I quickly realized this would be horribly inefficient and slow.  Imagine: to generate that last pixel, I'd be running through the loop all day hoping for the 1 in ~16.7 million chance that I got the last remaining color that hadn't already been used. Awful idea.

So instead let's just generate a big fat non-random array of all 16777216 RGB colors:

$AllRGBCombinations = @()
For ([Int]$Red = 0; $Red -LT 256; $Red++)
{
    For ([Int]$Green = 0; $Green -LT 256; $Green++)
    {
        For ([Int]$Blue = 0; $Blue -LT 256; $Blue++)
        {
            $AllRGBCombinations += [System.Drawing.Color]::FromArgb($Red, $Green, $Blue)
        }
    }
}

That does generate an array of 16777216 differently-colored and neatly-ordered pixel objects... but guess how long it takes?

*... the following day...*

Well, I wish I could tell you, but I killed the script after it ran for about 20 hours. I put a progress bar in just to check that it wasn't hung or in an endless loop, and it wasn't... the code is just really that slow.  It starts out at a decent pace and then gradually slows to a crawl.

Ugh, those dynamic arrays and the += operator strike again. I suspect it's because the above method recreates and resizes the array on every iteration... like good ole' ReDim back in the VBScript days.  That may be handy for small bits of data, but if you're dealing with large amounts of data that you want processed this decade, you'd better strongly type your stuff and use lists.  Let's try the above code another way:
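The pitfall isn't unique to Powershell. Here's the same trap sketched in Python, where rebuilding an immutable sequence on every append behaves like += on a fixed-size array, while a growable list appends in amortized constant time (this is an analogy, not the Powershell internals):

```python
import timeit

def rebuild_each_time(n):
    # Mimics += on a fixed-size array: every append copies the whole
    # sequence into a new, slightly larger one. O(n^2) total work.
    items = ()
    for i in range(n):
        items = items + (i,)
    return items

def append_in_place(n):
    # Mimics a strongly typed growable collection: amortized O(1) per append.
    items = []
    for i in range(n):
        items.append(i)
    return items

slow = timeit.timeit(lambda: rebuild_each_time(5000), number=1)
fast = timeit.timeit(lambda: append_in_place(5000), number=1)
print(slow > fast)
```

Even at a mere 5,000 elements the copy-per-append version loses badly, and the gap widens quadratically as the data grows, which is exactly why 16.7 million += operations never finished.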

$AllRGBCombinations = New-Object 'System.Collections.ObjectModel.Collection[System.Drawing.Color]'
[Int]$PixelsGenerated = 0
For ([Int]$Red = 0; $Red -LT 256; $Red++)
{
    For ([Int]$Green = 0; $Green -LT 256; $Green++)
    {
        For ([Int]$Blue = 0; $Blue -LT 256; $Blue++)
        {
            $AllRGBCombinations.Add([System.Drawing.Color]::FromArgb($Red, $Green, $Blue))
        }
    }
    $PixelsGenerated += 65536
    Write-Progress -Activity "Generating Pixels..." -Status "$PixelsGenerated / 16777216" -PercentComplete (($PixelsGenerated / 16777216) * 100)
}

Only 5.2 seconds in comparison, including the added overhead of writing the progress bar. Notice how I only update the progress bar once every 256 * 256 pixels, because it will slow you down a lot if you try to update the progress bar after every single pixel is created.

Now I can go ahead and generate an image containing exactly one of every color that looks like this:

Yep, there really are 16.7 million different colors in there, which is why even a shrunken PNG of it is 384KB.  Hard to compress an image when there are NO identical colors! The original 4096x4096 bitmap is ~36MB.  And I ended up loading a shrunken and compressed JPG for this blog post, because I didn't want every page hit consuming a meg of bandwidth.

It kinda' makes you think about how limited and narrow a human's vision is, doesn't it?  24-bit color seems to be fine for us when watching movies or playing video games, but that image doesn't seem to capture how impressive our breadth of vision should be.

Next time, we try to randomize our set of pixels a little, and try to make a prettier, less computerized-looking picture... but still only using each color only once.  See you soon.

Verifying RPC Network Connectivity Like A Boss

by Ryan 16. February 2014 10:02

Aloha.  I did some fun Powershelling yesterday and now it's time to share.

If you work in an IT environment that's of any significant size, chances are you have firewalls.  Maybe lots and lots of firewalls. RPC can be a particularly difficult network protocol to work with when it comes to making sure all the ports necessary for its operation are open on your firewalls. I've found that firewall guys sometimes have a hard time allowing the application guy's RPC traffic through their firewalls because of its dynamic nature. Sometimes the application guys don't really know how RPC works, so they don't really know what to ask of the firewall guys.  And to make it even worse, RPC errors can be hard to diagnose.  For instance, the classic RPC error 1722 (0x6BA) - "The RPC server is unavailable" sounds like a network problem at first, but can actually mean access denied, or DNS resolution failure, etc.

MSRPC, or Microsoft Remote Procedure Call, is Microsoft's implementation of DCE (Distributed Computing Environment) RPC. It's been around a long time and is pervasive in an environment containing Windows computers. Tons of Windows applications and components depend on it.

A very brief summary of how the protocol works: There is an "endpoint mapper" that runs on TCP port 135. You can bind to that port on a remote computer anonymously and enumerate all the various RPC services available on that computer.  The services may be using named pipes or TCP/IP.  Named pipes will use port 445.  The services that are using TCP are each dynamically allocated their own TCP ports, which are drawn from a pool of port numbers. This pool of port numbers is by default 1024-5000 on XP/2003 and below, and 49152-65535 on Vista/2008 and above. (The ephemeral port range.) You can customize that port range that RPC will use if you wish, like so:

reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v Ports /t REG_MULTI_SZ /f /d 8000-9000
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v PortsInternetAvailable /t REG_SZ /f /d Y
reg add HKLM\SOFTWARE\Microsoft\Rpc\Internet /v UseInternetPorts /t REG_SZ /f /d Y

Or, alternatively, you can move the entire dynamic (ephemeral) port range itself with netsh, which affects all of TCP/IP, not just RPC:

netsh int ipv4 set dynamicport tcp start=8000 num=1001
netsh int ipv4 set dynamicport udp start=8000 num=1001
netsh int ipv6 set dynamicport tcp start=8000 num=1001
netsh int ipv6 set dynamicport udp start=8000 num=1001

This is why we have to query the endpoint mapper first, because we can't just guess exactly which port we need to connect to for a particular service.
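That two-step dance (check the endpoint mapper, then probe whatever ports it reports) is the heart of the test. Before we get to the real Powershell implementation below, here's the connectivity-probe half of the idea sketched in Python; the reported_ports list is hypothetical, standing in for whatever the endpoint mapper would return:

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# First verify the endpoint mapper itself is reachable, then probe each
# dynamically allocated port it reported (hypothetical example):
#
# if port_open("server01", 135):
#     results = {p: port_open("server01", p) for p in reported_ports}
```

The hard part, of course, is getting reported_ports out of the endpoint mapper in the first place, which is what the P/Invoke code below is for.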

So, I wrote a little something in Powershell that will test the network connectivity of a remote machine for RPC, by querying the endpoint mapper, and then querying each port that the endpoint mapper tells me that it's currently using.

#Requires -Version 3
Function Test-RPC
{
    Param([Parameter(ValueFromPipeline=$True)][String[]]$ComputerName = 'localhost')
    BEGIN
    {
        Set-StrictMode -Version Latest
        $PInvokeCode = @'
        using System;
        using System.Collections.Generic;
        using System.Runtime.InteropServices;

        public class Rpc
        {
            // I found this crud in RpcDce.h

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingFromStringBinding(string StringBinding, out IntPtr Binding);

            [DllImport("Rpcrt4.dll")]
            public static extern int RpcBindingFree(ref IntPtr Binding);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqBegin(IntPtr EpBinding,
                                                    int InquiryType, // 0x00000000 = RPC_C_EP_ALL_ELTS
                                                    int IfId,
                                                    int VersOption,
                                                    string ObjectUuid,
                                                    out IntPtr InquiryContext);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcMgmtEpEltInqNext(IntPtr InquiryContext,
                                                    out RPC_IF_ID IfId,
                                                    out IntPtr Binding,
                                                    out Guid ObjectUuid,
                                                    out IntPtr Annotation);

            [DllImport("Rpcrt4.dll", CharSet = CharSet.Auto)]
            public static extern int RpcBindingToStringBinding(IntPtr Binding, out IntPtr StringBinding);

            public struct RPC_IF_ID
            {
                public Guid Uuid;
                public ushort VersMajor;
                public ushort VersMinor;
            }

            public static List<int> QueryEPM(string host)
            {
                List<int> ports = new List<int>();
                int retCode = 0; // RPC_S_OK
                IntPtr bindingHandle = IntPtr.Zero;
                IntPtr inquiryContext = IntPtr.Zero;
                IntPtr elementBindingHandle = IntPtr.Zero;
                RPC_IF_ID elementIfId;
                Guid elementUuid;
                IntPtr elementAnnotation;

                try
                {
                    retCode = RpcBindingFromStringBinding("ncacn_ip_tcp:" + host, out bindingHandle);
                    if (retCode != 0)
                        throw new Exception("RpcBindingFromStringBinding: " + retCode);

                    retCode = RpcMgmtEpEltInqBegin(bindingHandle, 0, 0, 0, string.Empty, out inquiryContext);
                    if (retCode != 0)
                        throw new Exception("RpcMgmtEpEltInqBegin: " + retCode);

                    do
                    {
                        IntPtr bindString = IntPtr.Zero;
                        retCode = RpcMgmtEpEltInqNext(inquiryContext, out elementIfId, out elementBindingHandle, out elementUuid, out elementAnnotation);
                        if (retCode != 0)
                        {
                            if (retCode == 1772)
                                break;
                            throw new Exception("RpcMgmtEpEltInqNext: " + retCode);
                        }

                        retCode = RpcBindingToStringBinding(elementBindingHandle, out bindString);
                        if (retCode != 0)
                            throw new Exception("RpcBindingToStringBinding: " + retCode);

                        // String bindings look like ncacn_ip_tcp:host[port]
                        string s = Marshal.PtrToStringAuto(bindString).Trim().ToLower();
                        if (s.StartsWith("ncacn_ip_tcp:"))
                            ports.Add(int.Parse(s.Split('[')[1].Split(']')[0]));

                        RpcBindingFree(ref elementBindingHandle);
                    }
                    while (retCode != 1772); // RPC_X_NO_MORE_ENTRIES
                }
                catch (Exception)
                {
                    return ports;
                }
                finally
                {
                    RpcBindingFree(ref bindingHandle);
                }
                return ports;
            }
        }
'@
    }
    PROCESS
    {
        ForEach ($Computer In $ComputerName)
        {
            [Bool]$EPMOpen = $False
            $Socket = New-Object Net.Sockets.TcpClient
            Try
            {
                $Socket.Connect($Computer, 135)
                If ($Socket.Connected)
                {
                    $EPMOpen = $True
                }
                $Socket.Close()
            }
            Catch
            {
                $Socket.Dispose()
            }

            If ($EPMOpen)
            {
                Add-Type $PInvokeCode
                $RPCPorts = [Rpc]::QueryEPM($Computer)
                [Bool]$AllPortsOpen = $True
                Foreach ($Port In $RPCPorts)
                {
                    $Socket = New-Object Net.Sockets.TcpClient
                    Try
                    {
                        $Socket.Connect($Computer, $Port)
                        If (!$Socket.Connected)
                        {
                            $AllPortsOpen = $False
                        }
                        $Socket.Close()
                    }
                    Catch
                    {
                        $AllPortsOpen = $False
                        $Socket.Dispose()
                    }
                }

                [PSObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen; 'RPCPortsInUse' = $RPCPorts; 'AllRPCPortsOpen' = $AllPortsOpen}
            }
            Else
            {
                [PSObject]@{'ComputerName' = $Computer; 'EndPointMapperOpen' = $EPMOpen}
            }
        }
    }
}

And the output will look a little something like this:

You can also query the endpoint mapper with PortQry.exe -n server01 -e 135, but I was curious about how it worked at a deeper level, so I ended up writing something myself. There weren't many examples of how to use that particular native API, so it was pretty tough.

Website Upgrade, Coding, and Dealing with NTFS ACLs on Server Core

by Ryan 9. January 2014 15:20

I apologize in advance - this blog post is going to be all over the place.  I haven't posted in a while, mainly because I've been engrossed in a personal programming project, part of which includes a multithreaded web server that I wrote over the weekend and that I'm kind of proud of. My ShareDiscreetlyWebServer is single-threaded, because when I wrote it, I had not yet grasped the awesome power of the async and await keywords in C#.  They're très sexy. Right now the only verb I support is GET (because it's all I need for now), but it's about as fast as you could hope a server written in managed code could be.

Secondly, I just upgraded this site to Blogengine.NET v2.9.  The motivation behind it was that today, I got this email from a visitor to the site:

I tried to leave you a comment but it didnt work.
Can you please go through steps you took to migrate your blog to Azure as I am interested in doing the same thing.
Did you set it up as a Azure Web Site or use an Azure VM and deployed it that way?
Are you using BlogEngine or some other blog publishing tool.

Wait, my comments weren't working? Damnit. I tried to post a comment myself and sure enough, commenting on this blog was busted. It was working fine but it just stopped working some time in the last few weeks. And so, I figured that if I was going to go to the trouble of debugging it, I'd just go ahead and upgrade Blogengine.NET while I was at it.

But first, to answer the guy's question above, my blog migration was simple. I used to host this blog out of my house on a Windows Server running in my home office. I signed up for a Server 2012 Azure virtual machine, RDP'ed to it, installed the IIS role, robocopy'd my entire C:\inetpub directory to the new VM, and that was that.

Version 2.9 is a little lackluster so far.  They updated most of the UI to the simplistic, sleek "modern" look that's all the rage these days, especially on tablets and phones.  But in the process, it appears they've regressed to the point where the editor is no longer compatible with Internet Explorer 11, 10, or 9. (Not that it worked perfectly before, either.)  It's annoying as hell. I'm writing this post right now in IE with compatibility mode turned on, and still half of the buttons don't work.  It's crippled compared to version 2.8, which I was on this morning.

It's ironic that the developers who wrote a CMS entirely in .NET, in Visual Studio, couldn't be bothered to test it on any version of IE.  Guess I'll wait patiently for version 3.0.  Or maybe write my own CMS after I finish writing the web server to run it on.

But even after the upgrade, and after fixing all the little miscellaneous bugs that the upgrade introduced, it still didn't fix my busted comment system. So I had to dig deeper. I logged on to the server, fired up Process Monitor while I attempted to post a comment: 

w3wp.exe gets an Access Denied error right there, clear as day.  (Thanks again, ProcMon.)

If you open the properties of w3wp.exe, you'll notice that it runs in the security context of an application pool, e.g. "IIS APPPOOL\Default Web Site". So just give that security principal access to that App_Data directory.  Only one problem...

Server Core.

No right-clicking our way out of this one.  Of course we could have done this with cacls.exe or something, but you know I'm all about the Powershell.  So let's do it in PS.

$Acl = Get-Acl C:\inetpub\wwwroot\App_Data
$Ace = New-Object System.Security.AccessControl.FileSystemAccessRule("IIS APPPOOL\Default Web Site", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
$Acl.AddAccessRule($Ace)
Set-Acl C:\inetpub\wwwroot\App_Data $Acl

Permissions, and commenting, are back to normal.

IPv4Address Attribute In Get-ADComputer

by Ryan 2. December 2013 12:00

Guten Tag, readers!

Administrators who use Microsoft's Active Directory module for Powershell are most likely familiar with the Get-ADComputer cmdlet.  This cmdlet retrieves information from the Active Directory database about a given computer object.  Seems pretty straightforward, but recently I started wondering about something in Get-ADComputer's output:

Get-ADComputer IPv4Address

IPv4Address?  I don't recall that data being stored in Active Directory... well, not as an attribute of the computer objects themselves, anyway.  If you take a look at a computer object with ADSI Edit, the closest thing you'll find is an ipHostNumber attribute, but it appears to not be used:

ADSI Edit Computer Properties

Hmm... well, by this point, if you're anything like me, you're probably thinking that a DNS query is about the only other way that the cmdlet could be getting this data.  But I wasn't satisfied with just saying "it's DNS, dummy," and forgetting about it.  I wanted to know exactly what was going on under the hood.

So I started by disassembling the entire Microsoft.ActiveDirectory.Management assembly.  (How did I know which assembly to look for?)

After searching the resulting source code for ipv4, it started to become quite clear.  From Microsoft.ActiveDirectory.Management.Commands.ADComputerFactory<T>:

internal static void ToExtendedIPv4(string extendedAttribute, string[] directoryAttributes, ADEntity userObj, ADEntity directoryObj, CmdletSessionInfo cmdletSessionInfo)
{
  if (directoryObj.Contains(directoryAttributes[0]))
  {
    string dnsHostName = directoryObj[directoryAttributes[0]].Value as string;
    userObj.Add(extendedAttribute, (object) IPUtil.GetIPAddress(dnsHostName, IPUtil.IPVersion.IPv4));
  }
  else
  {
    userObj.Add(extendedAttribute, new ADPropertyValueCollection());
  }
}

Alright, so now we know that Get-ADComputer is using another class named IPUtil to get the IP address of a computer as it runs. Let's go look at IPUtil:

internal static string GetIPAddress(string dnsHostName, IPUtil.IPVersion ipVersion)
{
  if (string.IsNullOrEmpty(dnsHostName))
    return (string) null;
  try
  {
    foreach (IPAddress ipAddress in Dns.GetHostEntry(dnsHostName).AddressList)
    {
      if (ipAddress.AddressFamily == (AddressFamily) ipVersion && (ipVersion != IPUtil.IPVersion.IPv6 || !ipAddress.IsIPv6LinkLocal && !ipAddress.IsIPv6SiteLocal))
        return ipAddress.ToString();
    }
    return (string) null;
  }
  catch (SocketException ex)
  {
    return (string) null;
  }
}

Ahh, there it is.  The ole' trusty, tried and true System.Net.Dns.GetHostEntry() method.  The cmdlet runs that code every time you look up a computer object.  Also notice that the method returns the first valid IP address it finds, so this cmdlet isn't going to work very well for computers with multiple IP addresses.  It would have been trivial to make the cmdlet return an array of all valid IP addresses instead, but alas, the Powershell developers did not think that was necessary.  And of course, if the DNS query fails for any reason, you simply end up with a null in the IPv4Address field.
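To make the behavior concrete, here's roughly what the cmdlet is doing, sketched in Python (a stand-in for the decompiled C#, not Microsoft's code): resolve the computer's dNSHostName via DNS, return the first IPv4 result, and swallow any resolution failure into a null.

```python
import socket

def get_ipv4_address(dns_host_name):
    """Resolve a host name via DNS and return the first IPv4 address
    found, or None on an empty name or a resolution failure."""
    if not dns_host_name:
        return None
    try:
        for family, _, _, _, sockaddr in socket.getaddrinfo(dns_host_name, None):
            if family == socket.AF_INET:  # first IPv4 result wins, like the cmdlet
                return sockaddr[0]
        return None
    except socket.gaierror:
        return None

print(get_ipv4_address("localhost"))
```

Note that, just like the real cmdlet, any second or third address the host owns is silently discarded.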

I've noticed that Microsoft's Active Directory cmdlets have many little "value-added" attributes baked into their cmdlets, but sometimes they can cause confusion, because you aren't sure where the data is coming from, or the "friendly" name that Powershell ascribes to an attribute doesn't match the attribute's name in Active Directory, etc.

Locating Active Directory Site Options with Powershell

by Ryan 2. October 2013 16:30

So as you may know, I hang out on ServerFault a lot.  And last night, one of my favorite ServerFault members, Mark Marra, asked an interesting question there that sent me on a long journey of research in order to answer it.

(Mark's got his own IT blog by the way which you should totally check out. He's a world class Active Directory guy, the kind of guy that doesn't usually ask easy questions, so I'm proud of myself whenever I'm able to answer one of his questions.)

The link to the actual question and answer on ServerFault is here, most of which I am about to repeat in this post, but I'll see if I can embellish a little here on this blog.


How can I use PowerShell to find AD site options like +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED in PowerShell? I've been playing around with the following command, but can't get it to spit out anything useful.

Get-ADObject -Filter 'objectClass -eq "site"' `
-Searchbase (Get-ADRootDSE).ConfigurationNamingContext `
-Properties options


The above command is very close, but off by just a hair. The short and simple answer is that the above command is searching for the options attribute of the Site object itself, when we actually need to be looking at the NTDS Site Settings object belonging to that site. And furthermore, there is no Powershell-Friendly way of getting this data as of yet, i.e., there is no simple Get-ADSiteOptions Cmdlet... but we may just have the cure for that if you can get through the rest of this blog post. We just need to figure out where exactly to find the data, and then we can use Powershell to grab it, wherever it may be hiding.

Take the following two commands: 

repadmin commands

Repadmin /options <DC> gives us the DSA options that are specific to the domain controller being queried, such as whether the domain controller is a global catalog or not, and the Repadmin /siteoptions <DC> command gives us the options applied to the Active Directory Site to which the queried domain controller belongs (or you can ask about another site's settings with the /site:California parameter). Full repadmin syntax is here, or just use the /experthelp parameter.

Note that these settings are relatively advanced settings in AD, so you may not work with them on a regular basis. Sites by default have no options defined, so if you find yourself working with these options, chances are you have a more complex AD replication structure on your hands than the average Joe. If all you have are a few sites that are fully bridged/meshed, all with plenty of bandwidth, then you probably have no need to modify these settings. More importantly, if you modify any of these settings, it's very important that you document your changes, so that future administrators will know what you've done to the domain.

So where does repadmin.exe get this information?

The settings for individual domain controllers come from here: 

ADSI Edit 1

That is, the options attribute of the NTDS Settings object for each domain controller.

The site options come from the NTDS Site Settings object for each site. (Not the site object itself: ) 

Site Options

Here is the basic MSDN documentation on the Options attribute:

A bitfield, where the meaning of the bits varies from objectClass to objectClass. Can occur on Inter-Site-Transport, NTDS-Connection, NTDS-DSA, NTDS-Site-Settings, and Site-Link objects.

Now we know exactly which bits repadmin.exe works on when we issue a command such as repadmin /options DC01 +IS_GC or repadmin /siteoptions DC01 /site:Arlington +IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED. Fortunately, repadmin.exe as well as the ADSI Edit MMC snap-in both have bitmask translators in their code, so that they can show us the friendly names of the value of the options attribute, instead of just a 32-bit hexadecimal code.

If we want to roll our own Get-ADSiteOptions Cmdlet, we'll have to build our own bitmask translator too.

Fortunately the bitfields for both the DC settings and the Site settings are documented, here and here. Here is an excerpt for the Site options bitmask: 

Site Options Bitmask

So now we have enough information to start working on our Get-ADSiteOptions Cmdlet. Let's start with this basic snippet of Powershell:

ForEach($Site In (Get-ADObject -Filter 'objectClass -eq "site"' -Searchbase (Get-ADRootDSE).ConfigurationNamingContext))
{
    Get-ADObject "CN=NTDS Site Settings,$($Site.DistinguishedName)" -Properties Options
}

What that does is get the DistinguishedName of every Site in the forest, iterate through them and get the attributes of each Site's NTDS Site Settings object. If the options attribute has not been set for a Site (which remember, is the default,) then it will not be shown. Only Sites with modified options will show as having an options attribute at all. Furthermore, in Powershell, it will come out looking like this:

Powershell site options

It's in decimal. 16 in decimal is 0x10 in hex, which we now know means IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED.
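That decimal-to-flag-name translation is exactly what our bitmask translator has to do. Here's the idea sketched in Python, with the bit values taken from the nTDSSiteSettingsFlags enum in NtDsAPI.h and the repadmin-style flag names used throughout this post:

```python
# Bit values from the nTDSSiteSettingsFlags enum (NtDsAPI.h, Windows SDK),
# paired with the flag names repadmin.exe displays.
NTDS_SITE_SETTINGS_FLAGS = {
    0x00000001: "IS_AUTO_TOPOLOGY_DISABLED",
    0x00000002: "IS_TOPL_CLEANUP_DISABLED",
    0x00000004: "IS_TOPL_MIN_HOPS_DISABLED",
    0x00000008: "IS_TOPL_DETECT_STALE_DISABLED",
    0x00000010: "IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED",
    0x00000020: "IS_GROUP_CACHING_ENABLED",
    0x00000040: "FORCE_KCC_WHISTLER_BEHAVIOR",
    0x00000080: "FORCE_KCC_W2K_ELECTION",
    0x00000100: "IS_RAND_BH_SELECTION_DISABLED",
    0x00000200: "IS_SCHEDULE_HASHING_ENABLED",
    0x00000400: "IS_REDUNDANT_SERVER_TOPOLOGY_ENABLED",
}

def decode_site_options(options):
    """Translate the numeric options attribute into its flag names."""
    if not options:
        return ["(none)"]
    return [name for bit, name in NTDS_SITE_SETTINGS_FLAGS.items() if options & bit]

print(decode_site_options(16))  # ['IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED']
```

The same table-driven approach carries straight over into the Powershell Cmdlet below, where an enum type plays the role of the dictionary.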

So, without further ado, let's see if we can build our own Get-ADSiteOptions Cmdlet:

#Requires -Version 3
#Requires -Module ActiveDirectory
Function Get-ADSiteOptions
{
<#
.SYNOPSIS
    This Cmdlet gets Active Directory Site Options.
.DESCRIPTION
    This Cmdlet gets Active Directory Site Options.
    We can fill out the rest of this comment-based help later.
.NOTES
    Written by Ryan Ries, October 2013.
#>
    BEGIN
    {
        Set-StrictMode -Version Latest

        # This enum comes from NtDsAPI.h in the Windows SDK.
        # Also thanks to Jason Scott for pointing it out to me.
        Add-Type -TypeDefinition @"
                                   public enum nTDSSiteSettingsFlags {
                                   NTDSSETTINGS_OPT_IS_AUTO_TOPOLOGY_DISABLED            = 0x00000001,
                                   NTDSSETTINGS_OPT_IS_TOPL_CLEANUP_DISABLED             = 0x00000002,
                                   NTDSSETTINGS_OPT_IS_TOPL_MIN_HOPS_DISABLED            = 0x00000004,
                                   NTDSSETTINGS_OPT_IS_TOPL_DETECT_STALE_DISABLED        = 0x00000008,
                                   NTDSSETTINGS_OPT_IS_INTER_SITE_AUTO_TOPOLOGY_DISABLED = 0x00000010,
                                   NTDSSETTINGS_OPT_IS_GROUP_CACHING_ENABLED             = 0x00000020,
                                   NTDSSETTINGS_OPT_FORCE_KCC_WHISTLER_BEHAVIOR          = 0x00000040,
                                   NTDSSETTINGS_OPT_FORCE_KCC_W2K_ELECTION               = 0x00000080,
                                   NTDSSETTINGS_OPT_IS_RAND_BH_SELECTION_DISABLED        = 0x00000100,
                                   NTDSSETTINGS_OPT_IS_SCHEDULE_HASHING_ENABLED          = 0x00000200,
                                   NTDSSETTINGS_OPT_IS_REDUNDANT_SERVER_TOPOLOGY_ENABLED = 0x00000400  }
"@
    }
    PROCESS
    {
        ForEach($Site In (Get-ADObject -Filter 'objectClass -eq "site"' -Searchbase (Get-ADRootDSE).ConfigurationNamingContext))
        {
            $SiteSettings = Get-ADObject "CN=NTDS Site Settings,$($Site.DistinguishedName)" -Properties Options
            If(!$SiteSettings.PSObject.Properties.Match('Options').Count -OR $SiteSettings.Options -EQ 0)
            {
                # I went with '(none)' here to give it a more classic repadmin.exe feel.
                # You could also go with $Null, or omit the property altogether for a more modern, Powershell feel.
                [PSCustomObject]@{SiteName=$Site.Name; DistinguishedName=$Site.DistinguishedName; SiteOptions='(none)'}
            }
            Else
            {
                [PSCustomObject]@{SiteName=$Site.Name; DistinguishedName=$Site.DistinguishedName; SiteOptions=[Enum]::Parse('nTDSSiteSettingsFlags', $SiteSettings.Options)}
            }
        }
    }
}

And finally, a screenshot of the fruits of our labor - what we set out to do, which was to view AD Site options in Powershell:

Mind Your Powershell Efficiency Optimizations

by Ryan 22. September 2013 11:28

A lazy Sunday morning post!

As most would agree, Powershell is the most powerful Windows administration tool ever seen. In my opinion, you cannot continue to be a Windows admin without learning it. However, Powershell is not breaking any speed records. In fact it can be downright slow. (After all, it's called Power-shell, not Speed-shell.)

So, as developers or sysadmins or devopsapotami or anyone else who writes Powershell, I implore you to not further sully Powershell's reputation for being slow by taking the time to benchmark and optimize your script/code.

Let's look at an example.

$Numbers = @()
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers += Get-Random } }

I'm simply creating an array (of indeterminate size) and proceeding to fill it with 10,000 random numbers.  Notice the use of Measure-Command { }, which is what you want to use for seeing exactly how long things take to execute in Powershell.  The above procedure took 21.3 seconds.

So let's swap in a strongly-typed array and do the exact same thing:

[Int[]]$Numbers = New-Object Int[] 10000
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers[$_] = Get-Random } }

We can produce the exact same result, that is, a 10,000-element array full of random integers, in 0.47 seconds.

That's an approximate 45x speed improvement.

We called the Get-Random Cmdlet 10,000 times in both examples, so that is probably not our bottleneck. Using [Int[]]$Numbers = @() doesn't help either, so I don't think it's the boxing and unboxing overhead that you'd see with an ArrayList. Instead, the dramatic performance difference comes from preallocating the array at a fixed size. Using += on a Powershell array doesn't grow it in place - it allocates a brand-new array that is one element larger and copies every existing element into it, and doing that 10,000 times adds up fast.
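If you don't know the final element count up front, a growable generic list is a middle ground worth benchmarking for yourself (a sketch of mine, not one of the timed examples above):

```powershell
# List[Int] grows by doubling its internal buffer, so Add() is cheap on average.
$Numbers = New-Object System.Collections.Generic.List[Int]
Measure-Command { (0 .. 9999) | ForEach-Object { $Numbers.Add((Get-Random)) } }
```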

Once you've got your script working, then you should think about optimizing it. Use Measure-Command to see how long specific pieces of your script take. Powershell, and all of .NET to a larger extent, gives you a ton of flexibility in how you write your code. There is almost never just one way to accomplish something. However, with that flexibility, comes the responsibility of finding the best possible way.

Server Core Page File Management with Powershell

by Ryan 19. September 2013 19:19

A quickie for tonight.

(Heh heh...)

Microsoft is really pushing to make this command line-only, Server Core and Powershell thing happen. No more GUI. Everything needs to get done on the command line. Wooo command line. Love it.

So... how the heck do you set the size and location of the paging file(s) without a GUI? Could you do it without Googling... er, Binging? Right now?

You will be able to if you remember this:

$PageFileSizeMB = [Math]::Truncate(((Get-WmiObject Win32_ComputerSystem).TotalPhysicalMemory + 200MB) / 1MB)
Set-CimInstance -Query "SELECT * FROM Win32_ComputerSystem" -Property @{AutomaticManagedPagefile=$False}
Set-CimInstance -Query "SELECT * FROM Win32_PageFileSetting" -Property @{InitialSize=$PageFileSizeMB; MaximumSize=$PageFileSizeMB}

The idea here is that I'm first turning off automatic page file management, and then I am setting the size of the page file manually, to be static and to be equal to the size of my RAM, plus a little extra. If you want full memory dumps in case your server crashes, you need a page file that is the size of your physical RAM plus a few extra MB.
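To sanity-check the result afterwards, a couple of read-only queries will do (my addition, not part of the three lines above):

```powershell
# Should now report False:
Get-CimInstance Win32_ComputerSystem | Select-Object AutomaticManagedPagefile
# Should show your static initial/maximum sizes in MB:
Get-CimInstance Win32_PageFileSetting | Select-Object Name, InitialSize, MaximumSize
```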

You could have also done this with wmic.exe without using Powershell, but when given a choice between Powershell and not-Powershell, I pretty much always go Powershell.

Did I mention Powershell?

Powershell RemoteSigned and AllSigned Policies, Revisited

by Ryan 4. September 2013 12:35

A while back I talked about Powershell signing policies here.  Today I'm revisiting them.

Enabling script signing is generally a good idea for production environments. Things like script signing policies, using Windows Firewall, etc., all add up to provide a more thorough defense-in-depth strategy.

On the other hand, one negative side-effect that I've seen from enforcing Powershell signing policies is that it breaks the Best Practices Analyzers in Windows Server, since the BPAs apparently use scripts that are unsigned. Which is strange, since Microsoft is usually very good about signing everything that they release. I assume that they've since fixed that.

I'd consider the biggest obstacle to using only signed Powershell scripts to be one of discipline. But maybe that in itself is a good thing - if only the admins and engineers who are disciplined enough to put their digital signature on something are allowed to run PS scripts in production, perhaps that will cut down on the incidents of wild cowboys running ill-prepared scripts in your environment, or making a quick change on the fly to an important production script, etc. Could you imagine having to re-sign your script every time you changed a single line? That seems like the perfect way to encourage the behavior that scripts are first perfected in a lab, and only brought to production once they're fully baked and signed.

The next obstacle is getting everyone their own code signing certificate. This means you either need to spend some money getting a boat-load of certs from a public CA for all your employees, or you need to maintain your own PKI in your own Active Directory forest(s) for internal-use-only certificates.  This part alone is going to disqualify many organizations. Very rare is the organization that cares about properly signing things in their IT infrastructure.  Even rarer is the organization that actually does it, as opposed to just saying they want everything to be properly signed.  "It's just too much hassle... and now you're asking me to sign all my scripts, too?"
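For reference, once someone does have a code signing certificate in their personal store (whether from an internal enterprise CA or a public one), the signing step itself is short. The script path below is a placeholder of mine:

```powershell
# Grab the first code signing certificate from the current user's personal store:
$Cert = Get-ChildItem Cert:\CurrentUser\My -CodeSigningCert | Select-Object -First 1
# .\MyScript.ps1 is a placeholder path - substitute your own script.
Set-AuthenticodeSignature -FilePath .\MyScript.ps1 -Certificate $Cert
```

And yes, you have to re-run that last line after every edit, which is exactly the friction discussed above.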

I also want to reinforce this point: Like UAC, Powershell script execution policies are not meant to be relied upon as a strong security measure. Microsoft does not tout them as such. They're meant to prevent you from making mistakes and doing things by accident. Things like UAC and PS script execution policies will keep honest people honest and non-administrators from tearing stuff up.  An AllSigned execution policy can also thwart unsophisticated attempts to compromise your security by preventing things such as modifying your PS profile without your knowledge to execute malicious code the next time you launch Powershell. But execution policies are no silver bullet. They are simply one more thin layer in your overall security onion.

So now let's play Devil's advocate. We already know the RemoteSigned policy should be a breeze to bypass just by clearing the Zone.Identifier alternate NTFS data stream. How do we bypass an AllSigned policy?

PS C:\> .\script.ps1
.\script.ps1 : File C:\script.ps1 cannot be loaded. The file C:\script.ps1 is not digitally signed. The script will not execute on the system.

Script won't execute?  Do your administrator's script execution policies get you down?  No problem:

  • Open Notepad.
  • Paste the following line into Notepad:
Powershell.exe -NoProfile -Command "Powershell.exe -NoProfile -EncodedCommand ([Convert]::ToBase64String([System.Text.Encoding]::Unicode.GetBytes((Get-Content %1 | %% {$_} | Out-String))))"
  • Save the file as bypass.bat.
  • Run your script by passing it as a parameter to bypass.bat.

PS C:\> .\bypass.bat .\script.ps1

... And congratulations, you just bypassed Powershell's execution policy as a non-elevated user.
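And for completeness, the RemoteSigned bypass mentioned earlier - clearing the Zone.Identifier alternate data stream - is even shorter. Unblock-File ships with Powershell 3.0, and the Remove-Item variant does the same thing by hand:

```powershell
Unblock-File -Path .\script.ps1
# Or, equivalently, delete the alternate data stream directly:
Remove-Item -Path .\script.ps1 -Stream Zone.Identifier
```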

So in conclusion, even after showing you how easy it is to bypass the AllSigned execution policy, I still recommend that good execution policies be enforced in production environments. Powershell execution policies are not meant to foil hackers.

  1. They're meant to be a safeguard against well-intentioned admins accidentally running scripts that cause unintended consequences.
  2. They verify the authenticity of a script. We know who wrote it and we know that it has not been altered since they signed it.
  3. They encourage good behavior by making it difficult for admins and engineers to lackadaisically run scripts that could damage a sensitive environment.
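Point 2 is easy to check right from the shell, by the way; Get-AuthenticodeSignature tells you whether a script's signature is valid and who signed it:

```powershell
Get-AuthenticodeSignature -FilePath .\script.ps1 |
    Select-Object Status, StatusMessage, SignerCertificate
```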

About Me

Name: Ryan Ries
Location: Texas, USA
Occupation: Systems Engineer 

I am a Windows engineer and Microsoft advocate, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.

Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST

MCITP: Enterprise Administrator


Profile for Ryan Ries at Server Fault, Q&A for system administrators




I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.