Local Admin Password Maintainer

by Ryan 18. February 2015 17:02

Active Directory is great for robust, centralized management of a large number of I.T. assets.  But even once you have Active Directory, you're still left with the problem of what to do with the local administrator accounts on all of the domain members.  You probably don't want to disable the local admin account, because you'll need it if the computer is ever in a situation where it can't contact a domain controller.  But you don't have a good way of updating and maintaining the local Administrator password across your entire environment, either.  Everyone knows better than to use Group Policy Preferences to update the local administrator password on domain members, as it is completely insecure.  Most other solutions send the administrator passwords across the network in clear text, require an admin to manually run scripts or software that may not work well in complicated networks, and still leave you with the same local administrator password on every machine... so if an attacker knocks over any one computer in your entire domain, he or she now has access to everything.

This is the situation Local Admin Password Maintainer seeks to alleviate.  LAPM easily integrates into your Active Directory domain and fully automates the creation of random local administrator passwords on every domain member.  The updated password is then transmitted securely to a domain controller and stored in Active Directory.  Only users who have been given the appropriate permissions (Domain Administrators and Account Operators, by default) may view any password.

The solution consists of two files: Install.ps1, the one-time install script, and LAPM.exe, an agent that will periodically (e.g., once a month) execute on all domain members.  Please note that these two files will always be digitally signed by me.

Minimum Requirements

  • Active Directory. You need to be a member of both Domain Admins and Schema Admins to perform the install, and you must run it on the forest's schema master.
  • Forest and domain functional levels of 2008 or better. This software relies on a feature of Active Directory (confidential attributes) that doesn't technically require any certain forest or domain functional level, but enforcing this requirement is an easy way of ensuring that all domain controllers in your forest are running a modern version of Windows.
  • I do not plan on doing any testing of either the install or the agent on Windows XP or Server 2003.  I could hypothetically make this work on XP/2003 SP1, but I don't want to.  If you're still using those operating systems, you aren't that concerned with security anyway.
  • A Public Key Infrastructure (PKI), such as Active Directory Certificate Services, or otherwise SSL certificates installed on your domain controllers that enable LDAP over SSL on port 636.  This is because LAPM does not allow transmission of data over the network in an insecure manner.  It is possible to just bang out some self-signed certificates on your domain controllers and distribute them to your clients via Group Policy, but I do not recommend it.  (See the quick LDAPS connectivity check after this list.)
  • The installer requires Powershell 4, which means you need Powershell 4 on your schema master, which in turn means it must be running Server 2008 R2 or greater.  I could port the install script to an older version of Powershell, but I haven't done it yet.
  • The Active Directory Powershell module. This should already be present if you've met the requirements thus far.
  • The Active Directory Web Service should be running on your DCs. This should already be present if you've met the requirements thus far.
  • LAPM.exe (the "agent") will run on anything Windows Vista/Server 2008 or better, 32 or 64 bit.  I just don't feel like porting it back to XP/2003 yet.
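By the way, you can sanity-check that LDAP over SSL is actually working on a given domain controller before you start. Here is a quick sketch using the .NET System.DirectoryServices.Protocols classes (the DC hostname is a placeholder):

Add-Type -AssemblyName System.DirectoryServices.Protocols
# Connect to the DC on port 636 and attempt an SSL bind. A certificate or
# trust problem will surface here as an exception.
$Identifier = New-Object System.DirectoryServices.Protocols.LdapDirectoryIdentifier('dc01.contoso.com', 636)
$Connection = New-Object System.DirectoryServices.Protocols.LdapConnection($Identifier)
$Connection.SessionOptions.SecureSocketLayer = $True
Try   { $Connection.Bind(); Write-Host 'LDAPS bind succeeded.' }
Catch { Write-Warning "LDAPS bind failed: $($_.Exception.Message)" }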

COPYRIGHT AND DISCLAIMER NOTICE:

Copyright ©2015 Joseph Ryan Ries. All Rights Reserved.

IN NO EVENT SHALL JOSEPH RYAN RIES (HEREINAFTER REFERRED TO AS 'THE AUTHOR') BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND/OR ITS DOCUMENTATION, EVEN IF THE AUTHOR IS ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE AUTHOR SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
THE SOFTWARE AND ACCOMPANYING DOCUMENTATION, IF ANY, PROVIDED HEREUNDER IS PROVIDED "AS IS". THE AUTHOR HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.


Installation Instructions

  • Download the installation package found below, and unzip it anywhere on your Active Directory domain controller that holds the Schema Master FSMO role.  (Use the netdom query fsmo command if you forgot which DC is your Schema Master.)
  • If necessary, use the Unblock-File Powershell cmdlet or use the GUI to unblock the downloaded zip file.
  • You can verify the integrity of the downloaded files like so:
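For instance, since both files are supposed to be digitally signed, a quick Authenticode check does the trick (a sketch; this assumes your current directory is the folder you unzipped into):

# Both files should report a Status of Valid and show the author's signing certificate.
Get-AuthenticodeSignature .\Install.ps1, .\LAPM.exe | Format-List Path, Status, SignerCertificate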


  • If you need to change your Powershell execution policy in order to run scripts on your DC, do so now with Set-ExecutionPolicy RemoteSigned.
  • Execute the Install script by typing .\Install.ps1 in the same directory as the script and LAPM.exe.

  • The installation script will perform several prerequisite checks to ensure your Active Directory forest and environment meet the requirements above. It will also create a log file that stores a record of everything that takes place during the install session.  If you see any red [ERROR] text, read the error message, try to correct the problem that is preventing the install script from continuing (e.g., an SSL certificate is not trusted, or you're not on the Schema Master), then try again.  It's important that you read and consider the warning text, especially the part about how extending the Active Directory schema is a permanent operation.
  • Type yes at the warning prompt to commit to the installation.

  • The installation will now make a small schema modification by adding the LAPMLocalAdminPassword attribute to the Active Directory schema, adding that attribute to the computer object, and then adding an access control entry (ACE) to the root of the domain that allows the SELF principal the ability to write to that attribute.  That means that a computer has the right to modify its own LAPMLocalAdminPassword attribute, but not the attribute of another computer. (A computer does not have the ability to read its own LAPMLocalAdminPassword attribute. It is write-only.)
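If you want to verify that ACE after the install, something like this should work (a sketch; substitute your own domain's distinguished name):

# Dump the ACL on the domain root and look for the SELF write entry
# that the installer added:
dsacls "DC=contoso,DC=com" | Select-String 'SELF' -Context 0,3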

  • Finally, the install script copies LAPM.exe to the domain's SYSVOL share. This is so all domain members will be able to access it.
  • You are now done with the script and are in the post-installation phase.  You have one small thing left to do.
  • Open Group Policy Management on your domain controller.

  • Create a new GPO and link it to the domain:

  • Name the new GPO Local Admin Password Maintainer.
  • Right click on the new GPO and choose Edit. This will open the GPO editor.
  • Navigate to Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks.

  • Right-click in the empty area and choose New > Scheduled Task (At least Windows 7).

  • Choose these settings for the new scheduled task. It is very important that the scheduled task be run as NT Authority\System, also known as Local System.


  • This task will be triggered on the first of every month.  It's advisable to configure a random delay on the trigger, as this will mitigate the flood of new password uploads to your domain controllers on the first of the month.

  • For the program to execute, point to \\YourDomain\SYSVOL\YourDomain\LAPM.exe. Remember that the second "YourDomain" in the path is a reparse point/symlink that looks like "domain" if you view it in File Explorer.  For the optional argument, type BEGIN_MAGIC, in all capital letters.  It is case sensitive.
  • Lastly, the "Remove this item when it is no longer applied" setting is useful, and unchecking "Allow this task to be run on demand" can be too.  As an administrator, you have some leeway here to do what makes the most sense for your environment.  You might even choose to scope this GPO to a certain OU if you only want a subset of your domain members to participate in Local Admin Password Maintainer.

  • Click OK to confirm, and you should now have a new scheduled task that will execute on all domain members.
  • Close the Group Policy editor.
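If you'd rather smoke-test the agent on a single machine without waiting on Group Policy, you can create an equivalent task by hand with schtasks (a sketch; substitute your own domain name):

# Create a monthly task that runs the agent as Local System on the 1st...
schtasks /Create /TN "LAPM" /TR "\\contoso.com\SYSVOL\contoso.com\LAPM.exe BEGIN_MAGIC" /SC MONTHLY /D 1 /RU SYSTEM
# ...and kick it off right now:
schtasks /Run /TN "LAPM"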

Don't worry if the scheduled task also applies to domain controllers.  LAPM.exe detects whether it is running on a domain controller before it does anything, and exits if it is.


It also doesn't matter what the local administrator's name is, in case the account has been renamed; LAPM uses the SID.

LAPM logs successes and failures to the Windows Application event log.  Here is an example of what you might see if a client can't connect to a DC for some reason, like if SSL certificates aren't configured correctly:

In an event like this, LAPM.exe exits before changing the local administrator password, so the password will just stay what it was until the next time the scheduled job runs.

LAPM will generate a random, 16-character password.  The randomness comes from the cryptographically secure PRNG supplied by the Windows API.
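Just to illustrate the general idea (this is not LAPM's actual code), generating a password that way in Powershell might look like this:

# A cryptographically secure random 16-character password, ignoring the
# slight modulo bias for the sake of brevity:
$Rng      = New-Object System.Security.Cryptography.RNGCryptoServiceProvider
$CharSet  = [Char[]](33..126)   # printable ASCII, '!' through '~'
$Bytes    = New-Object Byte[] 16
$Rng.GetBytes($Bytes)
$Password = -Join ($Bytes | ForEach-Object { $CharSet[$_ % $CharSet.Count] })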

Success looks like this:


Now, notice that the standard domain user "Smacky the Frog" is unable to read the LAPMLocalAdminPassword attribute from Active Directory:

However, a Domain Administrator or Account Operator can!
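For example, from a Powershell session with the Active Directory module loaded, an administrator could read it like so (a sketch; the computer name is a placeholder):

# Returns the password for an admin; a regular user gets nothing back,
# because the attribute is confidential.
Get-ADComputer 'COMPUTER01' -Properties LAPMLocalAdminPassword |
    Select-Object Name, LAPMLocalAdminPassword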

Of course, you can also see it in the GUI, for example in Active Directory Users and Computers with the Advanced Features view turned on.

So there you have it. Be smart, test it out in a lab first, and then enjoy your 30-day, random rotating local admin passwords!

As I continue to update this software package, new versions will be published on this page.

Download:

LAPM-1.0.zip (54.4KB)


Configuring HP ILO Settings and TLS Certificates With Powershell

by Ryan 6. February 2015 17:02

I've been configuring HP ILOs lately. And of course, the cardinal rule in I.T. is that if you're going to do something more than once, then you must start automating it.  And if you want to automate something on Windows, you fire up Powershell.

Luckily, HP is playing ball with the HP Scripting Tools for Windows Powershell. The cmdlets are not half bad, either. Essentially, what I needed to do was configure a bunch of ILOs, including renaming them, setting some IPv6 settings, and putting valid SSL/TLS certificates on them.

First, let's save the ILO's address (or hostname), username and password for future use:

[String]$Device   = '10.1.2.3'
[String]$Username = 'Admin'
[String]$Password = 'P@ssword'
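(If you'd rather not have a plaintext password sitting in your script, prompting for it works just as well; a sketch:)

$Cred     = Get-Credential -UserName 'Admin' -Message 'ILO credentials'
$Username = $Cred.UserName
$Password = $Cred.GetNetworkCredential().Password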

Next, let's turn off IPv6 SLAAC and DHCPv6 (for ILO 3s, firmware ~1.28 or so, and above):

Set-HPiLOIPv6NetworkSetting -Server $Device `
                            -Username $Username `
                            -Password $Password `
                            -AddressAutoCfg Disable

Set-HPiLOIPv6NetworkSetting -Server $Device `
                            -Username $Username `
                            -Password $Password `
                            -DHCPv6Stateless Disable

Set-HPiLOIPv6NetworkSetting -Server $Device `
                            -Username $Username `
                            -Password $Password `
                            -DHCPv6Stateful Disable

Next I wanted to set the FQDN to what I wanted it to be... it was important that I turned DHCP off first, because the ILO wanted to set the domain name via DHCP, which locked the field from being edited even though no DHCP server was actually on the network:

Set-HPiLOServerName -Server $Device `
                    -Username $Username `
                    -Password $Password `
                    -ServerName 'server1-ilo.contoso.com'

Now I wanted to put a valid SSL/TLS certificate on the ILO. So, I needed to first generate a Certificate Signing Request (CSR) on the ILO:

Get-HPiLOCertificateSigningRequest -Server $Device `
                                   -Username $Username `
                                   -Password $Password

IP                          : 10.1.2.3
HOSTNAME                    : server1-ilo.contoso.com
STATUS_TYPE                 : OK
STATUS_MESSAGE              : OK
CERTIFICATE_SIGNING_REQUEST : -----BEGIN CERTIFICATE REQUEST-----
                              a1b2c3d4e5A0B1C2D3F4E5
                              3ba43+/evnokaDvzG9nbs3
                              a1b2c3d4e5A0B1C2D3F4E=
                              -----END CERTIFICATE REQUEST-----

Nice... now copy and paste the entire CSR text block, including the -----BEGIN and -----END lines, and submit it to your certificate authority.  Then the administrator of the certificate authority has to approve the request.
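If your CA happens to be Active Directory Certificate Services, the submission, at least, can be scripted with certreq (a sketch; the template name is whatever your CA admin has set up for web servers):

# Save the CSR text block to a file, then submit it to the enterprise CA:
certreq -submit -attrib "CertificateTemplate:WebServer" .\ilo.csr .\ilo.cer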

This is the one piece where automation breaks down, in my opinion, and some manual intervention is necessary. This is not a technical limitation, though... it's by design.  The entire basis of SSL/TLS public key cryptography is trust, and that trust has to come from other sources, such as the certificate authority administrator phoning the requestor and verifying that it was actually them making the request, or getting some additional HR info, or whatever.  If, at the end of the day, no extraordinary measure was taken to really verify the requestor's identity, then you can't really trust these certificates.

Anyway, once the CA has signed your CSR, you just need to import the signed certificate back into the ILO:

Import-HPiLOCertificate -Server $Device `
                        -Username $Username `
                        -Password $Password `
                        -Certificate (Get-Content C:\mycert.cer -Raw) 

Assuming no errors were returned, you're done. Your HP ILO will now reboot, and when it comes back up, it will be using a valid SSL certificate.

Also, HP ILOs cannot read certificates that use the PKCS #1 v2.1 format.  Add that to the huge pile of devices that cannot read a standard that came out in 2003.

New-DepthGaugeFile.ps1: The Powershell Pipeline Is Neat, But It's Also Slow

by Ryan 20. December 2014 14:12

If you know me or this blog at all, you know that first and foremost, I think Powershell is awesome.  It is essential to any Windows system administrator or I.T. pro's success.  The good news is that there are a dozen ways to accomplish any given task in Powershell.  The bad news is that eleven of those twelve techniques are typically as slow as a three-legged tortoise swimming through a vat of cold Aunt Jemima.

This is not the first, or the second, blog post I've made about Powershell performance pitfalls.  One of the fundamental problems with super-high-level languages such as Powershell, C#, Java, etc., is that they take raw performance away from you and give you abstractions in return, and you then have to fight the language in order to get your performance back.

Today I ran across another such example.

I'm creating a program that reads from files.  I need to generate a file that has "markers" in it that tell me where I am within that file at a glance.  I'll call this a "depth gauge."  I figured Powershell would be a simple way to create such a file.  Here is an example of what I'm talking about:

Depth Gauge or Yardstick File
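It comes out looking something like this (an illustrative sample; the characters after each offset are arbitrary filler):

0000000000000000 aB3xkQ9mZpL2vR7tYoN4wE6uI8sD1fG5hJ0cV2bM4nX7q
0000000000000040 pW8rT1yU3iO5aS7dF9gH2jK4lZ6xC8vB0nM3qE5wR7tYu
0000000000000080 kJ2hG4fD6sA8pO0iU3yT5rE7wQ9zX1cV4bN6mL8kJ0hGf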


The idea being that I'd be able to tell my program "show me what's at byte 0xFFFF of file.txt," and I'd be able to easily visually verify the answer because of the byte markers in the text file.  The random characters after the byte markers are just gibberish to take up space.  In the above example, each line takes up exactly 64 bytes - 62 printable characters plus \r\n.  (In ASCII.)

I reach for Powershell whenever I want to whip up something in 5 minutes that accomplishes a simple task.  And voila:

Function New-DepthGaugeFile
{
    [CmdletBinding()]
    Param([Parameter(Mandatory=$True)]
          [ValidateScript({Test-Path $_ -IsValid})]
          [String]$FilePath, 
          [Parameter(Mandatory=$True)]
          [Int64]$DesiredSize, 
          [Parameter(Mandatory=$True)]
          [ValidateRange(20, [Int32]::MaxValue)]
          [Int32]$BytesPerLine)
    Set-StrictMode -Version Latest
    [Int64]$CurrentPosition = 0
    $ValidChars = @('a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','x','y','z',
                    'A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z',
                    '0','1','2','3','4','5','6','7','8','9',
                    '!','@','#','$','%','^','&','*','(',')','_','+','-','=','?','<','>','.','[',']',';','`','{','}','\','/')
    
    Try
    {
        New-Item -Path $FilePath -ItemType File -Force -ErrorAction Stop | Out-Null
    }
    Catch
    {
        Write-Error $_
        Return
    }    
    
    [Int32]$BufferMaxLines = 64
    [Int32]$BufferMaxBytes = $BufferMaxLines * $BytesPerLine

    If ($DesiredSize % $BytesPerLine -NE 0)
    {
        Write-Warning 'DesiredSize is not a multiple of BytesPerLine.'
    }


    $LineBuffer = New-Object 'System.Collections.Generic.List[System.String]'
        
    While ($DesiredSize -GT $CurrentPosition)
    {
        [System.Text.StringBuilder]$Line = New-Object System.Text.StringBuilder
        
        # 17 bytes
        [Void]$Line.Append($("{0:X16} " -F $CurrentPosition))
        
        # X bytes
        1..$($BytesPerLine - 19) | ForEach-Object { [Void]$Line.Append($(Get-Random -InputObject $ValidChars)) }        
        
        # +2 bytes (`r`n)        

        [Void]$LineBuffer.Add($Line.ToString())
        $CurrentPosition += $BytesPerLine

        # If we're getting close to the end of the file, we'll go line by line.
        If ($CurrentPosition -GE ($DesiredSize - $BufferMaxBytes))
        {
            Add-Content $FilePath -Value $LineBuffer -Encoding Ascii
            [Void]$LineBuffer.Clear()
        }

        # If the buffer's full, and we still have more than a full buffer's worth left to write, then dump the
        # full buffer into the file now.
        If (($LineBuffer.Count -GE $BufferMaxLines) -And ($CurrentPosition -LT ($DesiredSize - $BufferMaxBytes)))
        {
            Add-Content $FilePath -Value $LineBuffer -Encoding Ascii
            [Void]$LineBuffer.Clear()
        }
    }
}

Now I can create a dummy file of any size and dimension with a command like this:

New-DepthGaugeFile -FilePath 'C:\testfolder1\largefile2.log' `
                   -DesiredSize 128KB -BytesPerLine 64

I thought I was being really clever by creating an internal "buffering" system, since I instinctively knew that performing a file write operation (Add-Content) on each and every loop iteration would slow me down.  I also knew from past experience that overuse of "array arithmetic" like $Array += $Element would slow me down because of the constant cloning and resizing of the array.  I also remembered that in .NET, strongly-typed lists are faster than ArrayLists because we want to avoid boxing and unboxing.
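To illustrate that last point with a trivial example:

# Array "arithmetic" clones and reallocates the entire array on every
# iteration - O(n^2) overall:
$Array = @()
Foreach ($X In 1..10000) { $Array += $X }

# A strongly-typed List appends in place - amortized O(1) per Add:
$List = New-Object 'System.Collections.Generic.List[Int32]'
Foreach ($X In 1..10000) { [Void]$List.Add($X) }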

Despite all these little optimizations, here is the performance while writing a 1MB file:

Measure-Command { New-DepthGaugeFile -FilePath C:\testfolder1\largefile.log `
                                     -DesiredSize 1MB -BytesPerLine 128 }

TotalSeconds    : 103.8428624

Over 100 seconds to generate 1 megabyte of data.  I'm running on an SSD capable of writing hundreds of megabytes per second, so storage is not the bottleneck.

To try to speed things up, I decided to focus on the line that appears to be doing the most work:

1..$($BytesPerLine - 19) | ForEach-Object { 
             [Void]$Line.Append($(Get-Random -InputObject $ValidChars)) }

The idea behind this line of code is that we add some random characters to each line.  If we want each line to take up 64 characters, then we add (64 - 19) random characters, because the byte marker at the beginning of the line, plus a space character, takes up 17 bytes, and the carriage return and line feed take up 2 more.

My first instinct was that the Get-Random action was taking all the CPU cycles.  So I replaced it with static characters... and it made virtually no difference.  Maybe it's the pipe and Foreach-Object?  Let's change it to this:

For ($X = 0; $X -LT ($BytesPerLine - 19); $X++)
{
    [Void]$Line.Append($(Get-Random -InputObject $ValidChars))
}

And now the results:

Measure-Command { New-DepthGaugeFile -FilePath C:\testfolder1\largefile.log `
                                     -DesiredSize 1MB -BytesPerLine 128 }

TotalSeconds    : 61.0464638

Exact same output, almost twice as fast.
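If I wanted to chase it further, my next suspect would be the thousands of remaining Get-Random cmdlet invocations. A single System.Random instance should be much cheaper to call in a tight loop, with the caveat that it is not cryptographically secure... which is fine, since these are just filler characters. A sketch (I haven't benchmarked this one):

# Create $Rand once, outside the While loop:
$Rand = New-Object System.Random

For ($X = 0; $X -LT ($BytesPerLine - 19); $X++)
{
    [Void]$Line.Append($ValidChars[$Rand.Next($ValidChars.Count)])
}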

Certificate Store Backup and Cleanup (With a Little Powershell)

by Ryan 17. December 2014 15:12

I was thinking about HTTPS packet inspection the other day... the type of HTTPS packet inspection that could be performed with a product such as Forefront Threat Management Gateway.  Basically, you start by funneling everyone's network traffic through your gateway.  Second, you install an SSL certificate into the Trusted Root CA certificate store on all of the client computers whose encrypted traffic you wish to inspect. Now your gateway is ready to act as a man-in-the-middle and decrypt everyone's outbound traffic.  Traffic to their personal email accounts on Gmail and Hotmail... traffic to their online banking websites... transparently, without their knowledge.

I.T. departments do this to their employees all the time.  But I wonder... if it's so easy for I.T. departments, what would stop something like a government agency from installing this same sort of gateway in an ISP datacenter, and sniffing everyone's HTTPS traffic?

If only the government had some way of getting a trusted CA certificate into the cert store on everyone's computer...


So anyway, this thought led me to think about cleaning up my own certificate stores on my personal machines.  We trust so many certificates by default, and we don't really know what they all are or where they came from.  Most of us who use Windows just rely on Microsoft's Windows Root Certificate Program to tell us which root CAs we should trust by default.

So first, we need to turn off Automatic Root Certificates Update via Group Policy (if you administer an Active Directory domain) or via local security policy if you are on a standalone PC:

Computer Configuration > Administrative Templates > System > Internet Communication settings > Turn off Automatic Root Certificates Update


If you leave this setting turned on, Windows will use the internet to re-download and replace any root CA certificates that it thinks you should trust.  So you'll delete them, and they'll just reappear.
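On a standalone machine, that policy just sets a registry value, so something like this should be equivalent (a sketch; run it elevated):

# The policy maps to this registry value (DWORD 1 = auto-update disabled):
$Key = 'HKLM:\SOFTWARE\Policies\Microsoft\SystemCertificates\AuthRoot'
If (-Not (Test-Path $Key)) { New-Item $Key -Force | Out-Null }
Set-ItemProperty -Path $Key -Name DisableRootAutoUpdate -Value 1 -Type DWord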

Secondly, there are many certificates in your certificate stores for a reason.  Many of them are there for verifying signed code, such as kernel drivers. If Windows cannot verify their digital signatures, they won't load, and Windows might not even boot properly.  Nevertheless, Microsoft continues to ship some very old certificates with Windows, such as these:


They're there for "backwards compatibility," of course.  I decided I'd take the risk that I didn't need certificates from the 20th century anymore, and figured I would delete them.

Then there's this guy:


On the list of organizations that engender warm, fuzzy feelings of implicit trust, this one is pretty much at rock bottom... right above the Nigerian pirate running a CA on his laptop.  Nevertheless, this is one of those root certificate authority certs that is protected and automatically distributed by the Windows Root Certificate Program.  But since we've disabled automatic certificate updates, and I don't feel like I should be compelled to trust this certificate authority, it's time to delete it.

But, one last thing before we delete the certificates.  Let's make a backup of all of our certificate stores, so that if we accidentally delete a certificate that's required for something important, we can restore it.  It took about 15 minutes to write the backup script in Powershell.  With another 5 minutes, you could sprinkle a little decoration on it and make a cmdlet out of it.

# Backup-CertificateStores.ps1
Set-StrictMode -Version Latest
[String]$CertBackupLocation = "C:\Users\Ryan\Documents\CertStoreBackup_$([Environment]::MachineName)\"

If (-Not (Test-Path $CertBackupLocation))
    { New-Item $CertBackupLocation -ItemType Directory | Out-Null }

$AllCerts = Get-ChildItem Cert:\ -Recurse | Where PSIsContainer -EQ $False

Foreach ($Cert In $AllCerts)
{
    [String]$StoreLocation = ($Cert.PSParentPath -Split '::')[-1]    
    If (-Not (Test-Path (Join-Path $CertBackupLocation $StoreLocation)))
        { New-Item (Join-Path $CertBackupLocation $StoreLocation) -ItemType Directory | Out-Null }

    If (-Not $Cert.HasPrivateKey -And -Not $Cert.Archived)
    {
        Export-Certificate -Cert $Cert -FilePath ([String](Join-Path (Join-Path -Path $CertBackupLocation $StoreLocation) $Cert.Thumbprint) + '.crt') -Force
    }
}

Now you have backups of all your public certificates (this doesn't back up your private certs or keys), so delete whichever ones you feel are unnecessary.
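And if you ever need to put one back, restoring from the backup is a one-liner (a sketch; adjust the path and target store to match the certificate you're restoring):

# Re-import a backed-up root certificate into the Trusted Root store:
Import-Certificate -FilePath "$CertBackupLocation\LocalMachine\Root\<thumbprint>.crt" `
                   -CertStoreLocation Cert:\LocalMachine\Root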

One Way of Exporting Nicer CSVs with Powershell

by Ryan 16. December 2014 11:12

One of the ever-present conundrums in working with computers is that data that looks good and easily readable to a human, and data that is easy and efficient for a computer to process, are never the same.

In Powershell, you see this "immutable rule" manifest itself in that, despite all the various Format-* cmdlets available to you, some data will just never look good in the console.  And if it looks good in the console, chances are you've mangled the objects so that they've become useless for further processing over the pipeline.  This is essentially one of the Powershell "gotchas" espoused by Don Jones, which he refers to as "Format Right."  The principle is that if you are going to format your Powershell output with a Format-* cmdlet, you should always do so at the end of the statement (i.e., on the right side).  The formatting should be the last thing you do in an expression, and you should never try to pass something that has been formatted over the pipeline.

CSV files, in my opinion, are a kind of happy medium: they are somewhat easy for humans to read (especially if the human has an application like Microsoft Excel or some such), and they are also relatively easy for computers to read and process.  Therefore, CSVs are a popular format for transporting data and feeding it to computers while still being legible to humans.

When you use Export-Csv to write a bunch of objects out to a CSV file:

# Get Active Directory groups, their members, and memberships:
Get-ADGroup -Filter * -SearchBase 'CN=Users,DC=domain,DC=local' -Properties Members,MemberOf | `
    Select Name, Members, MemberOf | `
    Export-Csv -NoTypeInformation -Path C:\Users\ryan\Desktop\test.csv 

And those objects contain arrays or lists as properties, you'll get something like this in your CSV file:

"Name","Members","MemberOf"
"MyGroup","Microsoft.ActiveDirectory.Management.ADPropertyValueCollection","Microsoft.ActiveDirectory.Management.ADPropertyValueCollection"

Uh... that is not useful at all.  What's happened is that instead of outputting the contents of the Active Directory group Members and MemberOf attributes, which are collections/arrays, Powershell has output only the names of the .NET types of those collections.

What we need is a way to expand those lists so that they'll go nicely into a CSV file.  So I usually do something like the script excerpt below.  This is just one possible way of doing it; I by no means claim that it's the best way or the only way.

#Get all the AD groups:
$Groups = Get-ADGroup -Filter * -SearchBase 'OU=MyOU,DC=domain,DC=com' -Properties Members,MemberOf

#Create/initialize an empty collection that will contain a collection of objects:
$CSVReadyGroups = @()

#Iterate through each one of the groups:
Foreach ($Group In $Groups)
{
    #Create a new object to hold our "CSV-Ready" version of the group:
    $CSVReadyGroup = New-Object PSObject
    #Add some properties to the object.
    $CSVReadyGroup | Add-Member -Type NoteProperty -Name 'Name'     -Value  $Group.Name
    $CSVReadyGroup | Add-Member -Type NoteProperty -Name 'Members'  -Value  $Null
    $CSVReadyGroup | Add-Member -Type NoteProperty -Name 'MemberOf' -Value  $Null

    # If the group has any members, then run the code inside these brackets:
    If ($Group.Members)
    {
        # Poor-man's serialization.
        # We are going to convert the array into a string, with NewLine characters 
        # separating each group member. Could also be more concise just to cast
        # as [String] and do  ($Group.Members -Join [Environment]::NewLine)

        $MembersString = $Null
        Foreach ($GroupMember In $Group.Members)
        {
            $MembersString += $GroupMember + [Environment]::NewLine
        }
        #Trim the one extra newline on the end:
        $MembersString = $MembersString.TrimEnd([Environment]::NewLine)
        #Add to our "CSV-Ready" group object:
        $CSVReadyGroup.Members = $MembersString
    }

    # If the group is a member of any other groups, 
    # then do what we just did for the Members:
    If ($Group.MemberOf)
    {
        $MemberOfString = $Null
        Foreach ($Membership In $Group.MemberOf)
        {
            $MemberOfString += $Membership + [Environment]::NewLine
        }
        $MemberOfString = $MemberOfString.TrimEnd([Environment]::NewLine)
        $CSVReadyGroup.MemberOf = $MemberOfString
    }

    #Add the object we've created to the collection:
    $CSVReadyGroups += $CSVReadyGroup
}

#Output our collection:
$CSVReadyGroups | Export-Csv -NoTypeInformation -Path C:\Users\ryan\Desktop\test.csv

Now you will have a CSV file that has readable arrays in it, that looks good when you open it with an application such as Excel.
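And, building on the -Join hint in the comments above, the whole thing can be condensed into calculated properties on Select-Object, if brevity suits you better (a sketch that should produce essentially the same output):

Get-ADGroup -Filter * -SearchBase 'OU=MyOU,DC=domain,DC=com' -Properties Members,MemberOf |
    Select-Object Name,
        @{ Name='Members';  Expression={ $_.Members  -Join [Environment]::NewLine } },
        @{ Name='MemberOf'; Expression={ $_.MemberOf -Join [Environment]::NewLine } } |
    Export-Csv -NoTypeInformation -Path C:\Users\ryan\Desktop\test.csv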

About Me

Ryan Ries
Texas, USA
Systems Engineer
ryan@myotherpcisacloud.com

I am a systems engineer with a focus on Microsoft tech, but I can run with pretty much any system that uses electricity.  I'm all about getting closer to the cutting edge of technology while using the right tool for the job.

This blog is about exploring IT and documenting the journey.


Blog Posts (or Vids) You Must Read (or See):

Pushing the Limits of Windows by Mark Russinovich
Mysteries of Windows Memory Management by Mark Russinovich
Accelerating Your IT Career by Ned Pyle
Post-Graduate AD Studies by Ned Pyle
MCM: Active Directory Series by PFE Platforms Team
Encodings And Character Sets by David C. Zentgraf
Active Directory Maximum Limits by Microsoft
How Kerberos Works in AD by Microsoft
How Active Directory Replication Topology Works by Microsoft
Hardcore Debugging by Andrew Richards
The NIST Definition of Cloud by NIST



MCITP: Enterprise Administrator

VCP5-DCV

Server Fault: Profile for Ryan Ries

LOPSA

GitHub: github.com/ryanries


I do not discuss my employers on this blog and all opinions expressed are mine and do not reflect the opinions of my employers.