Complex Azure Template Odyssey Part Two: Domain Controller

Posted by Rik Hepworth on Sunday, August 30, 2015

This series includes the following posts:

  1. Complex Azure Template Odyssey Part One: The Environment
  2. Complex Azure Template Odyssey Part Two: Domain Controller
  3. Complex Azure Template Odyssey Part Three: ADFS Server
  4. Complex Azure Template Odyssey Part Four: WAP Server

In part one of this series of posts I talked about the project driving my creation of these Azure Resource Templates, the structure of the template and what resources I was deploying. This post will go through the deployment and configuration of the first VM, which will become my domain controller and certificate server. In order to achieve my goals I need to deploy the VM, the DSC extension and finally the custom script extension to perform actions that current DSC modules can’t. I’ll show you the template code, the DSC code and the final scripts, and talk about the gotchas I encountered on the way.

Further posts will detail the ADFS and WAP server deployments.

The Template

I’ve already talked about how I’ve structured this project: A core template calls a collection of nested templates – one per VM. The DC template differs from the rest in that it too calls a nested deployment to make changes to my virtual network. Other than that, it follows the same convention.

[Screenshot: JSON outline view of the DC template]

The screenshot above is the JSON outline view of the template. Each of my nested VM templates follows the same pattern: The parameters block in each template is exactly the same. I’m using a standard convention for naming all my resources, so providing I pass the envPrefix parameter between each one I can calculate the name of any resource in the project. That’s important, as we’ll see in a moment. The variables block contains all the variables that the current template needs – things like the IP address that should be assigned or the image we use as our base for the VM. Finally, the resources section holds the items we are deploying to create the domain controller. This VM is isolated from the outside world so we need the VM itself and a NIC to connect it to our virtual network, nothing more. The network is created by the core template before it calls the DC template.
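
To make that concrete, here’s a rough sketch of how the core deployment might be kicked off and how envPrefix flows through. The cmdlets assume the AzureRM flavour of Azure PowerShell; the file path, resource group name, parameter values and the naming formula at the end are illustrative rather than lifted from my project.

```powershell
# Rough sketch only: kick off the core template, passing envPrefix down to the nested templates.
$envPrefix = 'tuServ'

New-AzureRmResourceGroup -Name ($envPrefix + '-rg') -Location 'North Europe'

New-AzureRmResourceGroupDeployment -Name ($envPrefix + 'CoreDeploy') `
    -ResourceGroupName ($envPrefix + '-rg') `
    -TemplateFile '.\Templates\azuredeploy.json' `
    -TemplateParameterObject @{
        envPrefix        = $envPrefix   # every nested template derives resource names from this
        adminUsername    = 'labadmin'
        adminPassword    = (Read-Host -Prompt 'Admin password' -AsSecureString)
        resourceLocation = 'North Europe'
        # _artifactsLocation and _artifactsLocationSasToken omitted here for brevity
    }

# Because every name follows the same convention, anything that knows envPrefix can
# calculate a resource name, e.g. (hypothetical formula) the domain controller VM:
$vmDCName = $envPrefix + 'DC'
```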

The nested deployment needs explaining. Once we’ve created our domain controller we need to make sure that all our other VMs receive the correct IP address for their DNS. In order to do that we have to reconfigure the virtual network that we have already deployed. The nested deployment here is an artefact of the original approach with a single template – it could actually be fully contained in the DC template.

To explain: We can only define a resource with a given type and name in a template once. Templates are declarative and describe how we want a resource to be configured. With our virtual network we want to reconfigure it after we have deployed subsequent resources. If we describe the network for a second time, the new configuration is applied to our existing resource. The problem is that we have already got a resource in our template for our network. We get around the problem by calling a nested deployment. That deployment is a copy of the network configuration, with the differences we need for our reconfiguration. In my original template which contained all the resources, that nested deployment depended on the DC being deployed and was then called. It had to be a nested deployment because the network was already in there once.

With my new model I could actually just include the contents of the network reconfiguration deployment directly in the DC template. I am still calling the nested resource simply because of the way I split my original template. The end result is the same. The VM gets created, then the DSC and script extensions run to turn it into a domain controller. The network template is then called to set the DNS IP configuration of the network to be the IP address of the newly-minted DC.

```json
{
    "name": "tuServUpdateVnet",
    "type": "Microsoft.Resources/deployments",
    "apiVersion": "2015-01-01",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/dcScript')]"
    ],
    "properties": {
        "mode": "Incremental",
        "templateLink": {
            "uri": "[concat(variables('updateVNetDNSTemplateURL'), parameters('_artifactsLocationSasToken'))]",
            "contentVersion": "1.0.0.0"
        },
        "parameters": {
            "resourceLocation": {
                "value": "[parameters('resourceLocation')]"
            },
            "virtualNetworkName": {
                "value": "[variables('virtualNetworkName')]"
            },
            "virtualNetworkPrefix": {
                "value": "[variables('virtualNetworkPrefix')]"
            },
            "virtualNetworkSubnet1Name": {
                "value": "[variables('virtualNetworkSubnet1Name')]"
            },
            "virtualNetworkSubnet1Prefix": {
                "value": "[variables('virtualNetworkSubnet1Prefix')]"
            },
            "virtualNetworkDNS": {
                "value": [
                    "[variables('vmDCIPAddress')]"
                ]
            }
        }
    }
}
```

The code above is contained in my DC template. It calls the nested deployment through a URI to the template. That points to an Azure storage container with all the resources for my deployment held in it. The template is called with a set of parameters that are mostly variables created in the DC template in accordance with the rules and patterns I’ve set. Everything is the same as the original network deployment with the exception of the DNS address, which is set to the DC’s IP address. Below is the network template. Note that the parameter block defines parameters that match those being passed in. All names are case sensitive.

```json
{
    "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "resourceLocation": {
            "type": "string",
            "defaultValue": "West US",
            "allowedValues": [
                "East US",
                "West US",
                "West Europe",
                "North Europe",
                "East Asia",
                "South East Asia"
            ],
            "metadata": {
                "description": "The region to deploy the storage resources into"
            }
        },
        "virtualNetworkName": {
            "type": "string"
        },
        "virtualNetworkDNS": {
            "type": "array"
        },
        "virtualNetworkPrefix": {
            "type": "string"
        },
        "virtualNetworkSubnet1Name": {
            "type": "string"
        },
        "virtualNetworkSubnet1Prefix": {
            "type": "string"
        }
    },
    "variables": {},
    "resources": [
        {
            "name": "[parameters('virtualNetworkName')]",
            "type": "Microsoft.Network/virtualNetworks",
            "location": "[parameters('resourceLocation')]",
            "apiVersion": "2015-05-01-preview",
            "tags": {
                "displayName": "virtualNetworkUpdate"
            },
            "properties": {
                "addressSpace": {
                    "addressPrefixes": [
                        "[parameters('virtualNetworkPrefix')]"
                    ]
                },
                "dhcpOptions": {
                    "dnsServers": "[parameters('virtualNetworkDNS')]"
                },
                "subnets": [
                    {
                        "name": "[parameters('virtualNetworkSubnet1Name')]",
                        "properties": {
                            "addressPrefix": "[parameters('virtualNetworkSubnet1Prefix')]"
                        }
                    }
                ]
            }
        }
    ],
    "outputs": {}
}
```

The VM itself is pretty straightforward. The code below deploys a virtual NIC and then the VM. The NIC needs to be created first and is then bound to the VM when the latter is deployed. The snippet has the nested resources for the VM extensions removed. I’ll show you those in a bit.

```json
{
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [],
    "location": "[parameters('resourceLocation')]",
    "name": "[variables('vmDCNicName')]",
    "properties": {
        "ipConfigurations": [
            {
                "name": "ipconfig1",
                "properties": {
                    "privateIPAllocationMethod": "Static",
                    "privateIPAddress": "[variables('vmDCIPAddress')]",
                    "subnet": {
                        "id": "[variables('vmDCSubnetRef')]"
                    }
                }
            }
        ]
    },
    "tags": {
        "displayName": "vmDCNic"
    },
    "type": "Microsoft.Network/networkInterfaces"
},
{
    "name": "[variables('vmDCName')]",
    "type": "Microsoft.Compute/virtualMachines",
    "location": "[parameters('resourceLocation')]",
    "apiVersion": "2015-05-01-preview",
    "dependsOn": [
        "[concat('Microsoft.Network/networkInterfaces/', variables('vmDCNicName'))]"
    ],
    "tags": {
        "displayName": "vmDC"
    },
    "properties": {
        "hardwareProfile": {
            "vmSize": "[variables('vmDCVmSize')]"
        },
        "osProfile": {
            "computername": "[variables('vmDCName')]",
            "adminUsername": "[parameters('adminUsername')]",
            "adminPassword": "[parameters('adminPassword')]"
        },
        "storageProfile": {
            "imageReference": {
                "publisher": "[variables('windowsImagePublisher')]",
                "offer": "[variables('windowsImageOffer')]",
                "sku": "[variables('windowsImageSKU')]",
                "version": "latest"
            },
            "osDisk": {
                "name": "[concat(variables('vmDCName'), '-os-disk')]",
                "vhd": {
                    "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'), '/', variables('vmDCName'), 'os.vhd')]"
                },
                "caching": "ReadWrite",
                "createOption": "FromImage"
            },
            "dataDisks": [
                {
                    "vhd": {
                        "uri": "[concat('http://', variables('storageAccountName'), '.blob.core.windows.net/', variables('vmStorageAccountContainerName'),'/', variables('vmDCName'),'data-1.vhd')]"
                    },
                    "name": "[concat(variables('vmDCName'),'datadisk1')]",
                    "createOption": "empty",
                    "caching": "None",
                    "diskSizeGB": "[variables('windowsDiskSize')]",
                    "lun": 0
                }
            ]
        },
        "networkProfile": {
            "networkInterfaces": [
                {
                    "id": "[resourceId('Microsoft.Network/networkInterfaces', variables('vmDCNicName'))]"
                }
            ]
        }
    },
    "resources": []
}
```

The NIC is pretty simple. I tell it the name of the subnet on my network I want it to connect to and I tell it that I want to use a static private IP address, and what that address is. The VM resource then references the NIC in the _networkProfile_ section.

The VM itself is built using the Windows Server 2012 R2 Datacentre image provided by Microsoft. That is specified in the _imageReference_ section. There are lots of VM images and each is referenced by publisher (in this case _MicrosoftWindowsServer_), offer (_WindowsServer_) and SKU (_2012-R2-Datacenter_). I’m specifying ‘latest’ as the version but you can be specific if you have built your deployment around a particular version of an image. They are updated regularly to include patches… There is a wide range of images available to save you time. My full deployment makes use of a SQL Server image and I’m also playing with a BizTalk image right now. It’s much easier than trying to sort out the install of products yourself, and the licence cost of the software gets rolled into the VM charge.

We need to add a second disk to our VM to hold the domain databases. The primary disk on a VM has read and write caching enabled. Write caching exposes us to risk of corrupting our domain database in the event of a failure, so I’m adding a second disk and setting the caching on that to none. It’s all standard stuff at this point.

I’m not going to describe the _IaaSDiagnostics_ extension. The markup for that is completely default as provided by the tooling when you add the resource. Let’s move on to the DSC extension.

```json
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmDCName'),'/InstallDomainController')]",
    "apiVersion": "2015-05-01-preview",
    "location": "[parameters('resourceLocation')]",
    "dependsOn": [
        "[resourceId('Microsoft.Compute/virtualMachines', variables('vmDCName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/IaaSDiagnostics')]"
    ],
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "1.7",
        "settings": {
            "modulesURL": "[concat(variables('vmDSCmoduleUrl'), parameters('_artifactsLocationSasToken'))]",
            "configurationFunction": "[variables('vmDCConfigurationFunction')]",
            "properties": {
                "domainName": "[variables('domainName')]",
                "adminCreds": {
                    "userName": "[parameters('adminUsername')]",
                    "password": "PrivateSettingsRef:adminPassword"
                }
            }
        },
        "protectedSettings": {
            "items": {
                "adminPassword": "[parameters('adminPassword')]"
            }
        }
    }
}
```

I should mention at this point that I am nesting the extensions within the VM resources section. You don’t need to do this – they can be resources at the same level as the VM. However, my experience from deploying this lot a gazillion times is that if I nest the extensions I get a more robust deployment. Pulling them out of the VM appears to increase the chance of the extension failing to deploy.

The DSC extension will do different things depending on the OS version of Windows you are using. For my 2012 R2 VM it will install the software required to use Desired State Configuration and it will then reboot the VM before applying any config. On the current Server 2016 preview images that installation and reboot isn’t needed as the pre-reqs are already installed.

The DSC extension needs to copy your DSC modules and configuration onto the VM. That’s specified in the modulesURL setting and it expects a zip archive with your stuff in it. I’ll show you that when we look at the DSC config in detail later. The configurationFunction setting specifies the PowerShell file that contains the function and the name of the configuration in that file to use. I have all the DSC configs in one file so I pass in DSCvmConfigs.ps1\\DomainController (note the escaped slash).

Finally, we specify the parameters that we want to pass into our PowerShell DSC function. We’re specifying the name of our Domain and the credentials for our admin account.

Once the DSC module has completed I need to do final configuration with standard PowerShell scripts. The custom script extension is our friend here. Documentation on this is somewhat sparse and I’ve already blogged on the subject to help you. The template code is below:

```json
{
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "name": "[concat(variables('vmDCName'),'/dcScript')]",
    "apiVersion": "2015-05-01-preview",
    "location": "[parameters('resourceLocation')]",
    "dependsOn": [
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'))]",
        "[concat('Microsoft.Compute/virtualMachines/', variables('vmDCName'),'/extensions/InstallDomainController')]"
    ],
    "properties": {
        "publisher": "Microsoft.Compute",
        "type": "CustomScriptExtension",
        "typeHandlerVersion": "1.4",
        "settings": {
            "fileUris": [
                "[concat(parameters('_artifactsLocation'),'/DomainController.ps1', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/PSPKI.zip', parameters('_artifactsLocationSasToken'))]",
                "[concat(parameters('_artifactsLocation'),'/tuServDeployFunctions.ps1', parameters('_artifactsLocationSasToken'))]"
            ],
            "commandToExecute": "[concat('powershell.exe -file DomainController.ps1',' -vmAdminUsername ',parameters('adminUsername'),' -vmAdminPassword ',parameters('adminPassword'),' -fsServiceName ',variables('vmWAPpublicipDnsName'),' -tsServiceName ',variables('vmTWAPpublicipDnsName'), ' -resourceLocation "\', parameters('resourceLocation'),'\"')]"
        }
    }
}
```

The extension downloads the files I need, which in this case are a zip containing the PSPKI PowerShell modules that I use to perform a bunch of certificate functions, a module of my own functions, and finally the DomainController.ps1 script that is executed by the extension. You can’t specify parameters for your script in the extension (and in fact you can’t call the script directly – you have to execute the powershell.exe command yourself) so you can see that I build the commandToExecute using a bunch of variables and string concatenation.
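
To make the concatenation easier to read, once the variables are substituted the command the extension actually runs ends up looking something like this (the values here are illustrative, not from my real deployment):

```powershell
# Roughly what the assembled commandToExecute resolves to (illustrative values):
powershell.exe -file DomainController.ps1 -vmAdminUsername labadmin -vmAdminPassword P@ssw0rd! -fsServiceName tuservwap -tsServiceName tuservtwap -resourceLocation "North Europe"
```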

The DSC Modules

I need to get the DSC modules I use onto the VM. To keep myself sane, I include the module source in the Visual Studio solution. Over time I’ve evolved a folder structure within the solution to separate templates, DSC files and script files. You can see this structure in the screenshot below.

[Screenshot: solution folder structure for the DSC files]

I keep all the DSC files together like this because I can then simply zip all the files in the DSC folder structure to give me the archive that is deployed by the DSC extension. In the picture you will see that there are a number of .ps1 files in the root. Originally I created separate files for the DSC configuration of each of my VMs. I then collapsed those into the single DSCvmConfigs.ps1 file and I simply haven’t removed the others from the project.
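
As a rough sketch, producing that archive can be as simple as the following (the paths are hypothetical; Compress-Archive needs PowerShell 5, and on older versions the .NET ZipFile class does the same job):

```powershell
# Sketch only: zip the DSC folder into the archive the DSC extension expects.
# The zip just needs the configuration script plus every DSC resource module it imports.
$dscFolder = '.\DSC'                          # contains DSCvmConfigs.ps1 and the module folders
$archive   = '.\Artifacts\DSCvmConfigs.ps1.zip'

Compress-Archive -Path (Join-Path $dscFolder '*') -DestinationPath $archive -Force

# The archive then gets uploaded to the storage container referenced by vmDSCmoduleUrl.
```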

My DomainController configuration function began life as the example code from the three-server SharePoint template on GitHub and I have since extended and modified it. The code is shown below:

```powershell
configuration DomainController {
    param
    (
        [Parameter(Mandatory)]
        [String]$DomainName,
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,
        [String]$DomainNetbiosName = (Get-NetBIOSName -DomainName $DomainName),
        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )
    Import-DscResource -ModuleName xComputerManagement, cDisk, xDisk, xNetworking, xActiveDirectory, xSmbShare, xAdcsDeployment
    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
    $Interface = Get-NetAdapter | Where Name -Like "Ethernet*" | Select-Object -First 1
    $InterfaceAlias = $($Interface.Name)
    Node localhost {
        WindowsFeature DNS {
            Ensure = "Present"
            Name = "DNS"    
        }
        xDnsServerAddress DnsServerAddress
        {
            Address        = '127.0.0.1'
            InterfaceAlias = $InterfaceAlias
            AddressFamily  = 'IPv4'
        }
        xWaitforDisk Disk2
        {
            DiskNumber = 2
            RetryIntervalSec =$RetryIntervalSec
            RetryCount = $RetryCount
        }
        cDiskNoRestart ADDataDisk
        {
            DiskNumber = 2
            DriveLetter = "F"     
        }
        WindowsFeature ADDSInstall {
            Ensure = "Present"
            Name = "AD-Domain-Services"
        }
        xADDomain FirstDS
        {
            DomainName = $DomainName
            DomainAdministratorCredential = $DomainCreds
            SafemodeAdministratorPassword = $DomainCreds
            DatabasePath = "F:NTDS"
            LogPath = "F:NTDS"
            SysvolPath = "F:SYSVOL"
        }
        WindowsFeature ADCS-Cert-Authority {
            Ensure = 'Present'
            Name = 'ADCS-Cert-Authority'
            DependsOn = '[xADDomain]FirstDS'
        }     
        WindowsFeature RSAT-ADCS-Mgmt {
            Ensure = 'Present'
            Name = 'RSAT-ADCS-Mgmt'
            DependsOn = '[xADDomain]FirstDS'
        }   
        File SrcFolder {
            DestinationPath = "C:src"
            Type = "Directory"
            Ensure = "Present"
            DependsOn = "[xADDomain]FirstDS"
        }
        xSmbShare SrcShare
        {
            Ensure = "Present"
            Name = "src"
            Path = "C:src"
            FullAccess = @("Domain Admins", "Domain Computers")
            ReadAccess = "Authenticated Users"
            DependsOn = "[File]SrcFolder" 
        }
        xADCSCertificationAuthority ADCS
        {
            Ensure = 'Present'
            Credential = $DomainCreds
            CAType = 'EnterpriseRootCA'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        WindowsFeature ADCS-Web-Enrollment {
            Ensure = 'Present'    
            Name = 'ADCS-Web-Enrollment'
            DependsOn = '[WindowsFeature]ADCS-Cert-Authority'
        }
        xADCSWebEnrollment CertSrv
        {
            Ensure = 'Present'
            Name = 'CertSrv'
            Credential = $DomainCreds
            DependsOn = '[WindowsFeature]ADCS-Web-Enrollment', '[xADCSCertificationAuthority]ADCS'  
        }               
        LocalConfigurationManager {
            DebugMode = $true
            RebootNodeIfNeeded = $true
        } 
    } 
}
```

The .ps1 file contains all the DSC configurations for my environment. The DomainController configuration starts with a list of parameters. These match the ones being passed in by the DSC extension, or have default or calculated values. The Import-DscResource command specifies the DSC modules that the configuration needs. I have to ensure that any I am using are included in the zip file downloaded by the extension. I am using modules that configure disks, network shares, Active Directory domains and certificate services.

The node section then declares my configuration. You can set configurations for multiple hosts in a single DSC configuration block, but I’m only concerned with the host I’m on – localhost. Within the block I then declare what I want the configuration of the host to be. It’s the job of the DSC modules to apply whatever actions are necessary to set the configuration to that which I specify. Just like in our resource template, DSC settings can depend on one another if something needs to be done before something else.

This DSC configuration installs the windows features needed for creating a domain controller. It looks for the additional drive on the VM and assigns it the drive letter F. It creates the new Active Directory domain and places the domain database files on drive F. Once the domain is up and running I create a folder on drive C called src and share that folder. I’m doing that because I create two certificates later and I need to make them available to other machines in the domain. More on that in a bit. Finally, we install the certificate services features and configure a certificate authority. The LocalConfigurationManager settings turn on as much debug output as I can and tell the system that if any of the actions in my config demand a reboot that’s OK – restart as and when required rather than waiting until the end.

I’d love to do all my configuration with DSC but sadly there just aren’t the modules yet. There are some things I just can’t do, like creating a new certificate template in my CA and then generating some specific templates for my ADFS services that are on other VMs. I also can’t set file rights on a folder, although I can set rights on a share. Notice that I grant access to my share to Domain Computers. Both the DSC modules and the custom script extension command are run as the local system account. When I try to read files over the network that means I am connecting to the share as the Computer account and I need to grant access. When I create the DC there are no other VMs in the domain, so I use the Domain Computers group to make sure all my servers will be able to access the files.

Once the DC module completes I have a working domain with a certificate authority.

The Custom Scripts

As with my DSC modules, I keep all the custom scripts for my VMs in one folder within the solution. All of these need to be uploaded to Azure storage so I can access them with the extension and copy them to my VMs. The screenshot below shows the files in the solution. I have a script for each VM that needs one, which is executed by the extension. I then have a file of shared functions and a zip with supporting modules that I need.

[Screenshot: custom scripts folder in the solution]
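
As a rough sketch of that upload step, something like the following does the job. The storage account, container and key are placeholders and the cmdlets assume the Azure.Storage module of the time; the SAS token it produces is what the templates consume as _artifactsLocationSasToken.

```powershell
# Sketch only: push the scripts and PSPKI.zip up to blob storage and generate a SAS token.
$ctx = New-AzureStorageContext -StorageAccountName 'tuservartifacts' -StorageAccountKey '<storage key>'

Get-ChildItem '.\Scripts' -File | ForEach-Object {
    Set-AzureStorageBlobContent -File $_.FullName -Container 'deploy' -Blob $_.Name -Context $ctx -Force
}

# Read-only token, valid long enough for the deployment to pull the files down.
$sasToken = New-AzureStorageContainerSASToken -Name 'deploy' -Permission r `
    -ExpiryTime (Get-Date).AddHours(4) -Context $ctx
```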

```powershell
#
# DomainController.ps1 
#
param (  
    $vmAdminUsername,
    $vmAdminPassword,
    $fsServiceName,
    $tsServiceName,
    $resourceLocation
)
$password = ConvertTo-SecureString $vmAdminPassword -AsPlainText -Force 
$credential = New-Object System.Management.Automation.PSCredential("$env:USERDOMAIN\$vmAdminUsername", $password)
Write-Verbose -Verbose "Entering Domain Controller Script" 
Write-Verbose -verbose "Script path: $PSScriptRoot"
Write-Verbose -Verbose "vmAdminUsername: $vmAdminUsername" 
Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword" 
Write-Verbose -Verbose "fsServiceName: $fsServiceName"
Write-Verbose -Verbose "tsServiceName: $tsServiceName"
Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN" 
Write-Verbose -Verbose "resourceLocation: $resourceLocation"
Write-Verbose -Verbose "==================================="
# Write an event to the event log to say that the script has executed.
$event = New-Object System.Diagnostics.EventLog("Application")
$event.Source = "tuServEnvironment"
$info_event = [System.Diagnostics.EventLogEntryType]::Information
$event.WriteEntry("DomainController Script Executed", $info_event, 5001)
Invoke-Command  -Credential $credential -ComputerName $env:COMPUTERNAME -ScriptBlock {
    param (
        $workingDir,
        $vmAdminPassword,
        $fsServiceName,
        $tsServiceName,
        $resourceLocation
    )
    # Working variables
    $serviceAccountOU = "Service Accounts"
    Write-Verbose -Verbose "Entering Domain Controller Script"
    Write-Verbose -verbose "workingDir: $workingDir"
    Write-Verbose -Verbose "vmAdminPassword: $vmAdminPassword"
    Write-Verbose -Verbose "fsServiceName: $fsServiceName"
    Write-Verbose -Verbose "tsServiceName: $tsServiceName"
    Write-Verbose -Verbose "env:UserDomain: $env:USERDOMAIN"
    Write-Verbose -Verbose "env:UserDNSDomain: $env:USERDNSDOMAIN"
    Write-Verbose -Verbose "env:ComputerName: $env:COMPUTERNAME"
    Write-Verbose -Verbose "resourceLocation: $resourceLocation"
    Write-Verbose -Verbose "==================================="
    # Write an event to the event log to say that the script has executed.
    $event = New-Object System.Diagnostics.EventLog("Application")
    $event.Source = "tuServEnvironment"    
    $info_event = [System.Diagnostics.EventLogEntryType]::Information 
    $event.WriteEntry("In DomainController scriptblock", $info_event, 5001)  
    #go to our packages scripts folder   
    Set-Location $workingDir
    $zipfile = $workingDir + "\PSPKI.zip"
    $destination = $workingDir
    [System.Reflection.Assembly]::LoadWithPartialName("System.IO.Compression.FileSystem") | Out-Null  
    [System.IO.Compression.ZipFile]::ExtractToDirectory($zipfile, $destination)  
    Import-Module .\tuServDeployFunctions.ps1
    #Enable CredSSP in server role for delegated credentials  
    Enable-WSManCredSSP -Role Server -Force   
    #Create OU for service accounts, computer group; create service accounts   
    Add-ADServiceAccounts -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU -password $vmAdminPassword  
    Add-ADComputerGroup -domain $env:USERDNSDOMAIN -serviceAccountOU $serviceAccountOU   
    Add-ADComputerGroupMember -group "tuServ Computers" -member ($env:COMPUTERNAME + '$')    
    #Create new web server cert template  
    $certificateTemplate = ($env:USERDOMAIN + "_WebServer")  
    Generate-NewCertificateTemplate -certificateTemplateName $certificateTemplate -certificateSourceTemplateName "WebServer"
    Set-tsCertificateTemplateAcl -certificateTemplate $certificateTemplate -computers "tuServComputers"
    # Generate SSL Certificates
    $fsCertificateSubject = $fsServiceName + "." + ($resourceLocation.Replace(" ", "")).ToLower() + ".cloudapp.azure.com"   
    Generate-SSLCertificate -certificateSubject $fsCertificateSubject -certificateTemplate $certificateTemplate  
    $tsCertificateSubject = $tsServiceName + ".northeurope.cloudapp.azure.com"  
    Generate-SSLCertificate -certificateSubject $tsCertificateSubject -certificateTemplate $certificateTemplate   
    # Export Certificates 
    $fsCertExportFileName = $fsCertificateSubject + ".pfx"  
    $fsCertExportFile = $workingDir + "\" + $fsCertExportFileName
    Export-SSLCertificate -certificateSubject $fsCertificateSubject -certificateExportFile $fsCertExportFile -certificatePassword $vmAdminPassword
    $tsCertExportFileName = $tsCertificateSubject + ".pfx"
    $tsCertExportFile = $workingDir + "\" + $tsCertExportFileName
    Export-SSLCertificate -certificateSubject $tsCertificateSubject -certificateExportFile $tsCertExportFile -certificatePassword $vmAdminPassword
    #Set permissions on the src folder   
    $acl = Get-Acl c:\src
    $acl.SetAccessRuleProtection($True, $True)  
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Domain Computers", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    $rule = New-Object System.Security.AccessControl.FileSystemAccessRule("Authenticated Users", "FullControl", "ContainerInherit, ObjectInherit", "None", "Allow")
    $acl.AddAccessRule($rule)
    Set-Acl c:\src $acl
    #Copy the exported certs to the src folder created by DSC so other VMs can collect them
    Copy-Item -Path "$workingDir\*.pfx" c:\src
} -ArgumentList $PSScriptRoot, $vmAdminPassword, $fsServiceName, $tsServiceName, $resourceLocation
```

The domain controller script is shown above. There are a whole bunch of Write-Verbose commands that output debug information which I can see through the Azure Resource Explorer as the script runs.

Pretty much the first thing I do here is an invoke-command. The script is running as local system and there’s not much I can actually do as that account. My invoke-command block runs as the domain administrator so I can get stuff done. Worth noting is that the invoke-command approach makes accessing network resources tricky. It’s not an issue here but it bit me with the ADFS and WAP servers.

I unzip the PSPKI archive that has been copied onto the server and load the modules therein. The files are downloaded to a folder whose path includes the version number of the script extension, so I can’t hard-code the location. Fortunately I can use the $PSScriptRoot variable to work out that location and I pass it into the invoke-command as $workingDir. The PSPKI modules allow me to create a new certificate template on my CA so I can generate new certs with exportable private keys. I need the same certs on more than one of my servers so I need to be able to copy them around. I generate the certs and drop them into the src folder I created with DSC. I also set the rights on that src folder to grant Domain Computers and Authenticated Users access. The latter is probably overdoing it, since the former should do what I need, but I spent a good deal of time being stymied by this so I’m taking a belt and braces approach.

The key functions called by the script above are shown below. Held in my modules file, these are all focused on certificate functions and pretty much all depend on the PSPKI modules.

```powershell
function Generate-NewCertificateTemplate {
    [CmdletBinding()]
    # note can only be run on the server with PSPKI eg the ActiveDirectory domain controller
    param
    (
        $certificateTemplateName,
        $certificateSourceTemplateName
    )
    Write-Verbose -Verbose "Generating New Certificate Template"
    Import-Module .\PSPKI\pspki.psm1
    $certificateCnName = "CN=" + $certificateTemplateName
    $ConfigContext = ([ADSI]"LDAP://RootDSE").ConfigurationNamingContext
    $ADSI = [ADSI]"LDAP://CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext"
    $NewTempl = $ADSI.Create("pKICertificateTemplate", $certificateCnName)
    $NewTempl.put("distinguishedName", "$certificateCnName,CN=Certificate Templates,CN=Public Key Services,CN=Services,$ConfigContext")
    $NewTempl.put("flags", "66113")
    $NewTempl.put("displayName", $certificateTemplateName)
    $NewTempl.put("revision", "4")
    $NewTempl.put("pKIDefaultKeySpec", "1")
    $NewTempl.SetInfo()
    $NewTempl.put("pKIMaxIssuingDepth", "0")
    $NewTempl.put("pKICriticalExtensions", "2.5.29.15")
    $NewTempl.put("pKIExtendedKeyUsage", "1.3.6.1.5.5.7.3.1")
    $NewTempl.put("pKIDefaultCSPs", "2,Microsoft DH SChannel Cryptographic Provider, 1,Microsoft RSA SChannel Cryptographic Provider")
    $NewTempl.put("msPKI-RA-Signature", "0")
    $NewTempl.put("msPKI-Enrollment-Flag", "0")
    $NewTempl.put("msPKI-Private-Key-Flag", "16842768")
    $NewTempl.put("msPKI-Certificate-Name-Flag", "1")
    $NewTempl.put("msPKI-Minimal-Key-Size", "2048")
    $NewTempl.put("msPKI-Template-Schema-Version", "2")
    $NewTempl.put("msPKI-Template-Minor-Revision", "2")
    $NewTempl.put("msPKI-Cert-Template-OID", "1.3.6.1.4.1.311.21.8.287972.12774745.2574475.3035268.16494477.77.11347877.1740361")
    $NewTempl.put("msPKI-Certificate-Application-Policy", "1.3.6.1.5.5.7.3.1")
    $NewTempl.SetInfo()
    $WATempl = $ADSI.psbase.children | where { $_.Name -eq $certificateSourceTemplateName }
    $NewTempl.pKIKeyUsage = $WATempl.pKIKeyUsage
    $NewTempl.pKIExpirationPeriod = $WATempl.pKIExpirationPeriod
    $NewTempl.pKIOverlapPeriod = $WATempl.pKIOverlapPeriod
    $NewTempl.SetInfo()
    $certTemplate = Get-CertificateTemplate -Name $certificateTemplateName
    Get-CertificationAuthority | Get-CATemplate | Add-CATemplate -Template $certTemplate | Set-CATemplate 
}
function Set-tsCertificateTemplateAcl {
    [CmdletBinding()]
    param
    (
        $certificateTemplate,
        $computers
    )
    Write-Verbose -Verbose "Setting ACL for cert $certificateTemplate to allow $computers"
    Write-Verbose -Verbose "---"
    Import-Module .\PSPKI\pspki.psm1
    Write-Verbose -Verbose "Adding group $computers to acl for cert $certificateTemplate"
    Get-CertificateTemplate -Name $certificateTemplate | Get-CertificateTemplateAcl | Add-CertificateTemplateAcl -User $computers -AccessType Allow -AccessMask Read, Enroll | Set-CertificateTemplateAcl  
} 
function Generate-SSLCertificate {
    [CmdletBinding()]
    param
    (   
        $certificateSubject,
        $certificateTemplate  
    )
    Write-Verbose -Verbose "Creating SSL cert using $certificateTemplate for $certificateSubject"
    Write-Verbose -Verbose "---"
    Import-Module .\PSPKI\pspki.psm1
    Write-Verbose -Verbose "Generating Certificate (Single)"
    $certificateSubjectCN = "CN=" + $certificateSubject
    # Version #1
    $powershellCommand = "& {get-certificate -Template " + $certificateTemplate + " -CertStoreLocation Cert:\LocalMachine\My -DnsName " + $certificateSubject + " -SubjectName " + $certificateSubjectCN + " -Url ldap:}"
    Write-Verbose -Verbose $powershellCommand
    $bytes = [System.Text.Encoding]::Unicode.GetBytes($powershellCommand)
    $encodedCommand = [Convert]::ToBase64String($bytes)
    Start-Process -wait "powershell.exe" -ArgumentList "-encodedcommand $encodedCommand" 
} 
function Export-SSLCertificate {
    [CmdletBinding()]
    param
    (
        $certificateSubject,
        $certificateExportFile,
        $certificatePassword
    )  
    Write-Verbose -Verbose "Exporting cert $certificateSubject to $certificateExportFile with password $certificatePassword"
    Write-Verbose -Verbose "---"
    Import-Module .\PSPKI\pspki.psm1
    Write-Verbose -Verbose "Exporting Certificate (Single)"
    $password = ConvertTo-SecureString $certificatePassword -AsPlainText -Force
    Get-ChildItem Cert:\LocalMachine\My | where { $_.subject -match $certificateSubject -and $_.Subject -ne $_.Issuer } | Export-PfxCertificate -FilePath $certificateExportFile -Password $password
}
```

Making sure it’s reusable

One of the things I’m trying to do here is create a collection of reusable configurations. I can take my DC virtual machine config and make it the core of any number of deployments in future. Key stuff like domain names and machine names are always parameterised all the way through template, DSC and scripts. When Azure Stack arrives I should be able to use the same configuration on-prem and in Azure itself and we can use the same building blocks for any number of customer projects, even though it was originally built for an internal project.

There’s stuff I need to do here: I need to pull the vNet template directly into the DC template – there’s no need for it to be separate; I could do with trimming back some of the unnecessary access rights I grant on the folders and shares; and you’ll notice that I am configuring CredSSP, which was part of my original, failed attempt to sort out file access from within the invoke-command blocks.

A quick round of credits

Whilst most of this work has been my own, bashing my head against the desk for a while, it is built upon code created by other people who need to be referenced:

  • The Azure Quick Start templates were invaluable in having something to look at in the early days of resource templates before the tooling was here.
  • PSPKI is a fantastic set of PowerShell modules for dealing with certs.
  • The individual VM scripts are derived from work done at Black Marble by Andrew Davidson and myself to build the exact same environment in the older Azure manner, without resource templates.