Define Once, Deploy Everywhere (Sort of...)

Posted by Rik Hepworth on Thursday, March 2, 2017

Using Lability, DSC and ARM to define and deploy multi-VM environments

Configuration as code crops up a lot in conversation these days. We are searching for that DevOps Nirvana of a single definition of our environment that we can deploy anywhere.

The solution my colleagues and I adopted at Black Marble is not quite that, but it comes close enough to satisfy our needs. This post details the technologies and techniques we adopted to achieve our goal, which sounds simple, right?

I want to be able to deploy a collection of virtual machines to my own computer using Hyper-V, to Dev/Test Labs in Azure, and to Azure Stack, using the same description of those virtual machines and their configuration.

Defining Our Platforms

Right now, we use Lab Manager (part of Team Foundation Server) at Black Marble to manage multi-VM environments for testing, hosted on a number of servers managed by System Center Virtual Machine Manager. Those labs are composed of virtual machines that can also be deployed to a developer’s workstation.

The issue is that those environments are pre-built – the machines are configured and the environment is saved as a whole. Each new lab created from the stored ‘template’ VMs must be patched, and adding a new machine to the lab is a pain.

Lab Manager itself is now end-of-life, so we are looking at alternatives (including Azure Stack – see below).

Microsoft Azure

We already use Azure to host virtual machines. However, even with the lower cost Dev/Test subscription type, running lots of machines in the public cloud can get very expensive.

Azure Dev/Test Labs helps to mitigate this cost issue somewhat by providing a governance wrapper. I can create a Lab and apply rules, such as which types of virtual machine can be created, and have running VMs automatically shut down at a set time to limit costs.

Within Azure we use Azure Resource Templates, which are JSON declarations of the services we require, to deploy our virtual machines. Once running, we have extensions that can be injected into a VM and used to execute scripts to configure them. With Windows servers, that means using the Desired State Configuration (DSC) extension.

Dev/Test Labs allows me to connect to a Git repository of artefacts. Those artefacts could be items I wish to install into a VM, but they can also be ARM templates to deploy complex environments of multiple VMs. Those ARM templates can then apply DSC configuration definitions to the VMs themselves.

Microsoft Azure Stack

Stack is coming soon. Right now, you can download a Technical Preview that runs on a single machine. Stack is aimed at organisations that have stuff they cannot put in the public cloud, for whatever reason, but want a consistent approach to their development that can span private and public cloud. The final form of Stack is expected to be similar to the current Cloud Platform Solution (CPS), which is way out of my budget. However, the POC runs on a server very close in specification and price point to my existing Lab Manager-controlled servers.

Stack aims to deliver parity with its public cloud older brother. That means that I can use the same ARM templates I use in Azure to deploy my IaaS services on Stack. I have the same DSC extension to inject my configuration, too.

What I don’t have right now on Stack (and it’s unclear what the final product will bring, so I won’t speculate) are the base operating system images that are provided by Microsoft in Azure. I can, however, create my own images and upload them to the internal Stack equivalent of the Azure Marketplace.

Hyper-V

On our desktops, laptops, and servers we use Hyper-V, Microsoft’s virtualisation technology. This offers some parity with Azure – it uses the same VHD disk file format, for example. I don’t get the same complex software-defined-networking but I still get virtual switches to which I can connect machines, and they can be private, internal, or external.

Private switches do what they say on the tin: They are a bubble within which my VMs can communicate with each other but not with the outside world. I can, therefore, have multiple identical bubbles all using the same IP address ranges without issue.

External switches are connected directly to a network adapter on the host. That’s really useful if I need to host servers that deliver services to my organisation, as I need to communicate with them directly. This is great on servers, and is useful on developer workstations with physical NICs. On laptops, however, it gets tricky if you’re using a WiFi network. Those were never designed with VMs in mind, and the way Windows connects an external switch to a wireless adapter is, quite frankly, a horrible kludge and I’ve always found it terribly unreliable.

Internal switches create a new virtual NIC on the host so it can communicate directly with VMs on the network. In Windows 10, we can use an internal switch alongside a NetNat, which allows Windows 10 to provide network address translation for the virtual network. This gives us a setup like your home internet – VMs can communicate out, but no direct inbound connections are allowed (yes, I know you can create NAT publishing rules too, but that’s not a topic for here).

One cool thing about a NetNat is that if you carefully define your IP address ranges, a single NetNat can pass traffic into the networks generated by multiple virtual switches. This allows me to have multiple environments that can coexist on separate subnets.
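
To make that concrete, here’s a minimal sketch of the host-side setup. The switch name ‘Lab-1’ and NAT name ‘LabNat’ are invented for the example; the address ranges are the ones our install script uses, described later:

# Create an internal switch; Hyper-V adds a host vNIC called 'vEthernet (Lab-1)'.
New-VMSwitch -Name 'Lab-1' -SwitchType Internal

# Give the host the first address on the lab subnet.
New-NetIPAddress -InterfaceAlias 'vEthernet (Lab-1)' -IPAddress 192.168.254.1 -PrefixLength 24

# One NetNat whose /19 prefix spans every /24 lab subnet from .224 to .255.
New-NetNat -Name 'LabNat' -InternalIPInterfaceAddressPrefix '192.168.224.0/19'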

Lability

I’ve saved this until last because it’s sort of the secret sauce in what we’ve been working on. I stumbled on Lability totally by chance, through random internet searching. It’s an open source solution to defining and deploying VMs on Windows using DSC to declare both the configuration of the environment (the VMs and their settings) and the VMs themselves (the guest OS configuration).

Lability was created by a chap called Iain Brighton and he deserves a great deal of credit for what he’s built.

With Lability, I can use the same DSC configurations that I created for my Azure deployments. I can use the same base VHD images that I need for my Azure Stack Deployments. Lability uses a DSC PowerShell file (.ps1), which can include configurations for multiple nodes – each of the VMs in our environment. It then uses a PowerShell Data file (.psd1) to declare the configuration of the VMs themselves (CPU, RAM, virtual switch etc) as well as pass in configuration details to the DSC file.

If you look at the Lability repo on GitHub you will find links to some excellent articles by people who have used Lability, which take you through setting up your Lability host (your computer) and your first environment.

Identifying Differences

Applying DSC

Lability and the Azure DSC extension work in subtly but importantly different ways. When you create a DSC configuration, you write a PowerShell configuration that imports the DSC resources which will do the actual configuration work, and you call those resources with specified values that declare the state you want. Within that PowerShell file you can put functions that figure out some of those values.

When you execute the PowerShell configuration, it runs through that script and generates a MOF file. That file is submitted to the DSC engine on the machine that you are configuring and used to pass parameters into the DSC Resources that are going to execute commands to apply your configuration.
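
As a generic illustration of that compile-then-apply flow (a trivial example, not one of our lab configs):

# A trivial configuration; executing it compiles a MOF rather than changing anything.
Configuration ExampleConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    node 'localhost' {
        File ExampleFolder {
            DestinationPath = 'C:\Example'
            Type            = 'Directory'
            Ensure          = 'Present'
        }
    }
}

# Running the configuration writes localhost.mof into .\ExampleConfig...
ExampleConfig -OutputPath .\ExampleConfig

# ...and Start-DscConfiguration hands that MOF to the local DSC engine to apply.
Start-DscConfiguration -Path .\ExampleConfig -Wait -Verbose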

When you use the DSC extension in Azure, it installs the necessary DSC resources on the VM and executes the PowerShell file on that machine, generating the MOF which is then applied.

When you use Lability, the PowerShell file is executed on the host machine and outputs the MOF files – you do this manually before executing a Lability command to create a new lab. Lability then takes care of injecting the MOF and the required DSC resources into the virtual machine, where the configuration is applied.

This is a critical difference! If you look at the examples in the Azure Quickstart Repo, all the DSC is written assuming that it is executed on the host, and uses PowerShell functions to do things like finding the network adapter, or the host IP address etc. If you look at the examples used in Lability labs, the data file provides many of those pieces of information. If you run the PowerShell from an Azure QuickStart template you’ll have some crazy failures, because all those functions execute on the host and therefore get totally incorrect information to pass to the configuration code.

Additionally, none of the Azure examples use a data file to provide configuration data. You might think this is because the data file is not supported. However, this is not true – you can pass a data file in using the DSC extension. Lability makes heavy use of that data file to define our environment.

Networking

In Azure, you cannot set a static IP address from within the VM itself. The networking fabric hands the machine its IP address via DHCP. You can set that IP to be static through the Azure fabric, but not through the VM. That might mean that we don’t know the IP address of a machine before we deploy it.

With Lability, we declare the IP address of the VM in the DSC data file. We could use a DHCP server running on the host, and I do just that myself, but it’s more stuff to install and manage, so for our approach to labs right now we’ve stuck to declaring addresses in the DSC data file.

We also have additional stuff to think about in Azure – public IP addresses, Network Security Groups and possibly User Defined Routing that controls how (and if) we allow inbound traffic from the internet onto our network, what can talk to what and on which ports within our network, and whether we want to push all traffic through appliances for security.

Azure API Versions

When you write an ARM template to define and deploy your services, each of the resources in that template is defined against a versioned API. You specify which API version you are using in the template, and different resource providers have different versions.

Azure Stack does not have all the same versions of the various APIs that are in Azure. Ironically, whilst I have had to make few changes to the content of existing ARM templates in order to use them successfully on Stack, I’ve had to change almost every API version referenced in them. Having said that, I am finding that the API versions I reference for Stack by and large work unchanged if I throw the template at Azure.
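
A quick way to compare is to ask each cloud’s resource providers what they support. Here’s a sketch using the AzureRM cmdlets of the time – run it connected to Azure, then to Stack:

# List the API versions the compute provider supports for virtual machines.
$provider = Get-AzureRmResourceProvider -ProviderNamespace 'Microsoft.Compute'
($provider.ResourceTypes | Where-Object { $_.ResourceTypeName -eq 'virtualMachines' }).ApiVersions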

Declaring Specific Goals

We’ve discussed our target platforms and talked about how those differ in terms of our deployment configurations. Let’s talk about what our aims were as we embarked on our project to manage VM labs:

  1. All labs should deploy from greenfield. One of our biggest pain points with our old approach was that our labs were built as a collection of VMs. We couldn’t change the name of the AD domain; changing IP address was complex; adding new VMs was painful; patching a ‘new’ environment could take hours.
    We were very clear that we wanted to create all new labs from base media, which we would try to keep current with patches (at least within a few months) and which would allow us to create any number of machines and environments.
  2. There should be one configuration for each guest VM, which would be used everywhere. We were very clear that we would create one DSC configuration for each role that we needed (for example, a Domain Controller or an ADFS server) and that configuration would be used whether we were creating a lab on a local machine, in Azure or Azure Stack.
  3. Maintain a distinction between a virtual machine configuration and an environment configuration. We are building a collection of virtual Lego with our VM configurations. Our teams can combine those Lego bricks into environments that may be project specific. There should be a configuration for those environments. We should never alter an existing configuration for a new environment – we should create a new configuration using the existing one as a base (for example, if we need additional roles on our DC).
  4. Take a common approach with Lability and Azure, whilst accepting we have to maintain two sets of resources.
    Our approach to Azure environments is already modular. We have templates for VMs that are combined into environments through Nested Deployments. This would not change. Our VM definitions would encompass a DSC configuration and an ARM template. Our environments would include both a DSC data file and an ARM template.
  5. Manage and automate the creation of base media. We would need a variety of base VHD files, analogous to the existing marketplace images in Azure: Windows Server (numerous versions), SQL Server, SharePoint, etc. Each of these must be created using scripts so they could be periodically rebuilt to achieve our goal of avoiding time-consuming patching of new environments. In short, we would need an Image Factory.
  6. Setup and use should be straightforward. We need our developers to be able to install all the tooling and get a new lab up and running quickly. We need easy integration with Azure Dev/Test Labs, etc. This would need some process automation around the build and release of the VM configurations and anything else we would create as part of the project.

Things You Will Need

If you want to build the same Lab solution as we did, you’re going to need a few things:

  1. Git Repository. All the code and configurations we create are ultimately stored in a central Git Repo. We are using Visual Studio Team Services, as it’s our chosen source control platform.
    Why Git? Two reasons: First of all, it allows us to easily deploy our solution to a developer workstation by simply cloning the repo. Second, Azure DevTest Labs needs a Git Repo to store Artifacts (our ARM templates) for deployment of environments.
  2. Build/Release automation. When we commit to our shared repo, our Build server executes some PowerShell to create deployment artifacts for Azure. It creates Zip archives from our configurations to be used with the DSC extension (a sketch of one way to do this follows the list). It makes no sense to create these by hand and waste space in our repo. Our Release pipeline then automatically pushes our artifacts to an Azure storage account that can be accessed by our developers as a single, central store for VM configurations.
  3. Private PowerShell Repository. We use ProGet to provide a local Nuget/PowerShell/NPM etc repository. We had this in place before we started this project, but it has proved invaluable. The simple reason is that we want to publish DSC resources for easy consumption and installation by our team. You’d be surprised at how many times we’ve hit a bug in a DSC resource which has been fixed in the source code repo but for which a new version has not yet been published. Maintaining our own repository allows us to publish our own versions of DSC resources (and in some cases our own bespoke resources).
  4. A server to host your Image Factory. I’m not going to spend time documenting this part of our solution. Far cleverer people than I have written about this and we followed their guidance. You need somewhere to host your images and run the scripts on a schedule to build new ones. Our builds run overnight and we place images on a Windows fileshare.
  5. An Azure subscription. If you want to use the same configuration for on-prem and cloud, saying that you need an Azure sub seems a little obvious. However, we are using nested deployments. These use resources that must be accessible to the Azure fabric at deploy time, and the easiest way to do that is to use Azure Storage. You’ll also need a subscription to host your DevTest lab if that’s your preferred approach. Note that you could have multiple subscriptions – our devs can use their MSDN Azure Benefit to host environments within their own DevTest lab, whilst the artefact store is on a corporate subscription and the artefact repo is in our VSTS.
  6. A code editor that understands PowerShell, DSC and ARM. I prefer Visual Studio and the Azure SDK, but Visual Studio Code is an equally powerful tool for creating and managing the files we are going to use.
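
To illustrate the packaging step from item 2, here’s a minimal sketch of one way to build and upload an archive, assuming the AzureRM and Azure.Storage modules. The account name and paths are invented for the example, and our real build zips resources per environment rather than per configuration:

# Publish-AzureRmVMDscConfiguration zips a configuration .ps1 together with
# the DSC resource modules it imports, ready for the Azure DSC extension.
Publish-AzureRmVMDscConfiguration -ConfigurationPath '.\VMs\DomainController\DomainController.ps1' `
    -OutputArchivePath '.\Build\DomainController.zip' -Force

# Push the archive to the central artifact store ('labartifacts' is hypothetical).
$ctx = New-AzureStorageContext -StorageAccountName 'labartifacts' -StorageAccountKey $env:LAB_STORAGE_KEY
Set-AzureStorageBlobContent -File '.\Build\DomainController.zip' -Container 'environments' -Blob 'DomainController.zip' -Context $ctx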

Managing our VMs and Environments

After much thought, we came up with a standard folder structure and approach to our VM and environment configurations and the supporting scripts needed to deploy them.

In our code repo we have the following folder structure:

\Environments – This folder contains a series of folders, one per environment. It is specified as the folder containing environment templates when the shared repo is connected to an Azure DevTest Lab.
\Environments\MyEnv1 – An environment folder contains three files:
\Environments\MyEnv1\MyEnv1.psd1 – The psd1 data file must share the same name as the folder. It contains all the configuration settings for all VMs in our environment and is used by Lability and the VM DSC configs.
\Environments\MyEnv1\azuredeploy.json – For DevTest Labs, the environment template used in Azure must be named azuredeploy.json. This template calls a series of other templates to deploy the virtual network and VMs to Azure.
\Environments\MyEnv1\metadata.json – This file is read by DevTest Labs and provides a name and description for our environment.
\VMs – This folder contains subfolders for each of our component virtual machines.
\VMs\MyVM1 – A VM folder contains at least two files:
\VMs\MyVM1\MyVM1.ps1 – The ps1 configuration file must share the same name as the folder. It contains the DSC PowerShell to apply the configuration to the guest VM.
\VMs\MyVM1\MyVM1.json – The json file shares the folder name for consistency. It is called by the azuredeploy.json environment template to create the VM in Azure and Azure Stack.
\Modules – The Modules folder contains shared code of various types:
\Modules\Scripts – The Scripts folder contains PowerShell scripts to install and configure our standard Lability deployment, wrap the Lability create and remove commands, and perform build and release tasks.
\Modules\Template – The Template folder holds common ARM templates that create standard elements shared between environments, called by the azuredeploy.json.
\Modules\DSC – This folder is used during the build process. All the DSC resources needed in an environment are downloaded to this folder. A script parses the VM DSC configurations called by an environment and creates Zip files, containing the correct DSC resources and DSC PowerShell for that environment, to be uploaded into Azure storage.

Wrapper Scripts for Lability

Lability is great but is built to work in a certain way. We have three scripts that perform key functions for our deployment.

Install Script

Our installation script performs the following functions (a condensed sketch of the key commands follows the list):

  1. Creates the C:\Virtualisation base folder we use to store VMs and the Lability working files.
  2. Sets the default Hyper-V locations for Virtual Machines and Virtual Hard disks to c:\Virtualisation
  3. Creates a new Internal Virtual Switch (named in accordance with our convention) and sets the IP address on the host NIC it creates to the required one. Our first switch creates a network of 192.168.254.0/24 and the host gets 192.168.254.1 as its IP address.
  4. Creates a new NetNat with an internal address prefix of 192.168.224.0/19. This will pass traffic into and out of up to thirty-two /24 subnets, from 192.168.224.0/24 up to 192.168.255.0/24. We decided to work from the top down when creating new networks.
  5. Makes sure that the Nuget package provider is installed and registers our ProGet server as a new PowerShell repository. We then remove the default PowerShellGallery registration and make sure our repo is trusted.
  6. Checks to see if Lability is installed and, if not, installs it using Install-Module.
  7. Sets the following Lability defaults using the Set-LabHostDefault command:
    ConfigurationPath: c:\Virtualisation\Configuration
    IsoPath: c:\Virtualisation\ISOs
    ParentVhdPath: c:\Virtualisation\MasterVirtualHardDisks
    DifferencingVhdPath: c:\Virtualisation\VMVirtualHardDisks
    ModuleCachePath: c:\Virtualisation\Modules
    ResourcePath: c:\Virtualisation\Resources
    HotfixPath: c:\Virtualisation\Hotfix
    RepositoryUri: <the URI of our ProGet Server, e.g. https://proget.mycorp.com/nuget/PowerShell/package>
  8. Sets the default virtual switch for Lability environments to our newly created one using the Set-LabVMDefault command.
  9. Registers our VHD base media by calling another script which loads a standard configuration data file. This is separate so we can perform this action independently.
  10. Sets the Lability default media to our Windows Server 2012 R2 standard VHD using the Set-LabVMDefault command.
  11. Initialises Lability using our configuration with the Start-LabHostConfiguration command.
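
Condensed into commands, steps 7 to 11 look something like the sketch below. The switch and NetNat creation from steps 3 and 4 were shown earlier; ‘Lab-1’ is the same invented switch name and the ProGet URI is the placeholder from above:

# Step 7: set the Lability host defaults.
$labDefaults = @{
    ConfigurationPath   = 'C:\Virtualisation\Configuration'
    IsoPath             = 'C:\Virtualisation\ISOs'
    ParentVhdPath       = 'C:\Virtualisation\MasterVirtualHardDisks'
    DifferencingVhdPath = 'C:\Virtualisation\VMVirtualHardDisks'
    ModuleCachePath     = 'C:\Virtualisation\Modules'
    ResourcePath        = 'C:\Virtualisation\Resources'
    HotfixPath          = 'C:\Virtualisation\Hotfix'
    RepositoryUri       = 'https://proget.mycorp.com/nuget/PowerShell/package'
}
Set-LabHostDefault @labDefaults

# Steps 8 and 10: default switch and media for new lab VMs.
Set-LabVMDefault -SwitchName 'Lab-1' -Media 'BM_Server_2012_R2_Standard_x64'

# Step 11: initialise the host, creating the folder structure above.
Start-LabHostConfiguration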

Once the install script has completed we have a fully configured host ready to deploy Lability labs.

Deploy-LocalLab script

Lability has a Start-LabConfiguration command which reads the psd1 configuration data file for an environment and creates the VMs. Before running that, however, you need to execute the PowerShell DSC scripts to generate the MOF files for each VM; Lability injects those, and the DSC resources, into the VMs. A second command, Start-Lab, boots the VMs themselves, respecting the boot order and delays that can be declared in the config file.

This is great unless you have a complex lab and need lots of DSC resources to make it work. Our wrapper script does the following, taking an environment name as a parameter (a simplified sketch follows the list):

  1. Reads the psd1 data file for our environment from the correct folder to identify the DSC resources we need (they are listed for Lability). It installs these resources so we can execute the PowerShell configuration scripts and generate the MOFs.
  2. Reads the psd1 data file to identify the VMs we are deploying. Based on the Role information in that file it will execute each of the configuration ps1 files from the VMs folder hierarchy, passing in the psd1 data file. The resultant MOFs get saved in the Lability configuration folder (c:\Virtualisation\Configuration).
  3. Executes the Start-LabConfiguration command, passing in the configuration data file.
  4. If we specify a -Start switch, the script starts the lab with the Start-Lab command.
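
Here’s a simplified sketch of the deploy wrapper, with the role-to-configuration mapping elided and a single DomainController role shown. Paths follow our repo layout and the parameter names follow the Lability commands above:

param([string]$EnvironmentName, [switch]$Start)

$dataFile   = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"
$credential = Get-Credential -Message 'Administrator credential for the lab VMs'

# Compile the MOFs on the host: dot-source each role's configuration and
# execute it, passing in the shared psd1 data file.
. '.\VMs\DomainController\DomainController.ps1'
DomainController -ConfigurationData $dataFile -Credential $credential `
    -OutputPath 'C:\Virtualisation\Configuration'

# Create the VMs; Lability injects the MOFs and DSC resources into each one.
Start-LabConfiguration -ConfigurationData $dataFile -Credential $credential

# Optionally boot the lab, respecting boot order and delays.
if ($Start) { Start-Lab -ConfigurationData $dataFile }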

Remove-LocalLab script

Our remove script takes the name of our environment as a parameter. It does the following (a sketch follows the list):

  1. Identifies the VMs in the lab using the Get-LabVM command, passing in the psd1 data file. It checks to see if any are running and, if so, calls the Stop-Lab command.
  2. Executes the Remove-LabConfiguration command, passing in the psd1 data file for the environment.
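
A sketch of the remove wrapper, assuming Get-LabVM reports a State for each node:

param([string]$EnvironmentName)

$dataFile = ".\Environments\$EnvironmentName\$EnvironmentName.psd1"

# Stop the lab first if any of its VMs are still running.
$running = Get-LabVM -ConfigurationData $dataFile | Where-Object { $_.State -eq 'Running' }
if ($running) { Stop-Lab -ConfigurationData $dataFile }

# Remove the VMs and their differencing disks.
Remove-LabConfiguration -ConfigurationData $dataFile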

Virtual Machine Configuration

We’ve challenged ourselves to only use Desired State Configuration for our VMs. This has been a big change from our previous approach to Azure VMs, which mixed DSC with custom PowerShell scripts deployed with a separate Azure VM extension. This has raised four issues we had to solve:

  1. The list of DSC Resources is growing but not all-encompassing. There are many areas where no DSC modules exist. To overcome this, we have used a mix of SetScript code contained within a DSC configuration (which has some limitations) and bespoke DSC modules hosted in our ProGet repository.
  2. Existing Published DSC resources may contain bugs. In many cases code fixing those bugs has been supplied as pull requests but may be undergoing review, and sometimes no new release of the resource has been created. We now have our own separate code repository for DSC resources (including our own) where we keep these and we publish versions to our own repository. When a new official version including the fixes is released it will supersede our own.
  3. There are some good DSC resources out there on GitHub that aren’t published to the PowerShell gallery. We publish these into our own repository for access.
  4. Azure executes the DSC on the target VM to generate the MOF. Lability executes it on the host machine. That and other differences mean that we have wrapper code to switch the config sections, mostly based on an input parameter named IsAzure. When called from the Azure DSC extension we specify that parameter, and on a Lability host we don’t. I realise that purists will argue that this means we don’t really have a single configuration. I would counter that I have a single configuration file and therefore one thing to maintain. I don’t see any issue with logic inside that config deciding what happens.

Sample Configuration

Let’s illustrate our approach with an extract from a configuration. The code below is part of our DomainController config.

The config accepts some parameters. EnvPrefix is used to generate names within the environment. In Azure we use it to prefix our Azure resources. Within the environment it’s used to create things like the AD domain name. IsAzure tells the config whether it is being executed on the host or on the target VM inside Azure.

You’ll notice that we specify the DSC module versions. There are a few reasons why we do this – because some of the DSC resources are unofficial we want to make sure they come from our repository, and the way Lability downloads DSC resources from our ProGet Server means we need to specify a version number. Either way, we benefit from increased consistency – there have been some breaking changes between versions with the official DSC resources in the PowerShell Gallery!

If we’re in Azure we do things like find the network adapter through code and we don’t specify network addresses. We use the IsAzure parameter to wrap this stuff in If blocks.

The configuration values come from the psd1 data file, regardless of whether we deploy to Azure or locally. We do this to enforce consistency. Even though we probably could have the Azure config self-contained in the script, we don’t.

Configuration DomainController {
    param(
        [ValidateNotNull()]
        [System.Management.Automation.PSCredential]$Credential,
        [string]$EnvPrefix,
        [bool]$IsAzure = $false,
        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
        )
        Import-DscResource -ModuleName @{ModuleName="xNetworking";ModuleVersion="3.2.0.0"}
        Import-DscResource -ModuleName @{ModuleName="xPSDesiredStateConfiguration";ModuleVersion="6.0.0.0"}
        Import-DscResource -ModuleName @{ModuleName="xActiveDirectory";ModuleVersion="2.16.0.0"}
        Import-DscResource -ModuleName @{ModuleName="xAdcsDeployment";ModuleVersion="1.1.0.0"}
        Import-DscResource -ModuleName @{ModuleName="xComputerManagement";ModuleVersion="1.9.0.0"}
        $DomainName = $EnvPrefix + ".local"
        Write-Verbose "Processing Configuration DomainController"
        Write-Verbose "Processing configuration: Node DomainController"
        node $AllNodes.where({$_.Role -eq 'DomainController'}).NodeName {
            Write-Verbose "Processing Node: $($node.NodeName)"
            if ($IsAzure -eq $true) {
                #Find the first network adapter
                $Interface = Get-NetAdapter | Where-Object Name -Like "Ethernet*" | Select-Object -First 1
                $InterfaceAlias = $($Interface.Name)
            }
            LocalConfigurationManager {
                RebootNodeIfNeeded = $true;
                AllowModuleOverwrite = $true;
                ConfigurationMode = 'ApplyOnly'
                CertificateID = $node.Thumbprint;
                DebugMode = 'All';
            }
            # Skip this section if we are in Azure
            if ($IsAzure -eq $false) {
                # Set a fixed IP address if the config specifies one
                if ($node.IPaddress) {
                    xIPAddress PrimaryIPAddress {
                        IPAddress = $node.IPAddress;
                        InterfaceAlias = $node.InterfaceAlias;
                        PrefixLength = $node.PrefixLength;
                        AddressFamily = $node.AddressFamily;
                    }
                }
            }
            # Skip this section if we are in Azure
            if ($IsAzure -eq $false) {
                # Set a default gateway if the config specifies one
                if ($node.DefaultGateway){
                    xDefaultGatewayAddress DefaultGateway {
                        InterfaceAlias = $node.InterfaceAlias;
                        Address = $node.DefaultGateway;
                        AddressFamily = $node.AddressFamily;
                    }
                }
            }
            # Set the DNS server if the config specifies one
            if ($IsAzure -eq $true) {
                if ($node.DnsAddress){
                    xDNSServerAddress DNSaddress {
                        Address = $node.DnsAddress;
                        InterfaceAlias = $InterfaceAlias;
                        AddressFamily = $node.AddressFamily;
                    }
                }
            }
            else {
                if ($node.DnsAddress){
                    xDNSServerAddress DNSaddress {
                        Address = $node.DnsAddress;
                        InterfaceAlias = $node.InterfaceAlias;
                        AddressFamily = $node.AddressFamily;
                    }
                }
            }
        }
#End configuration DomainController
}

Sample Data File

Below is a sample data file for an environment containing a Domain Controller and single domain-joined server. Note that the data file contains a mix of data to be processed by the DSC configuration and Lability-specific information that defines the environment, including VM settings and the required DSC resources. When we deploy the lab locally, Lability processes the file to create the Virtual Machines and their hard disks (and create new virtual switches if we declare them). When we deploy in Azure this information is ignored – we can safely use the same data file in both situations.

# Single Domain Controller Lab
@{
    AllNodes = @(
        @{
            # DomainController
            NodeName = "DC";
            Role = 'DomainController';
            DSdrive = 'C:';
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;
            # Networking
            IPAddress = '192.168.254.2';
            DnsAddress = '127.0.0.1';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';
            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 0;
            Lability_BootDelay = 600;
        };
        @{
            # MemberServer
            NodeName = "SR01";
            Role = 'MemberServer';
            DSdrive = 'C:';
            #Prevent credential error messages
            PSDscAllowPlainTextPassword = $true;
            PSDscAllowDomainUser = $true;
            # Networking
            IPAddress = '192.168.254.3';
            DnsAddress = '192.168.254.2';
            DefaultGateway = '192.168.254.1';
            PrefixLength = 24;
            AddressFamily = 'IPv4';
            DnsConnectionSuffix = 'lab.local';
            InterfaceAlias = 'Ethernet';
            # Lability extras
            Lability_Media = 'BM_Server_2012_R2_Standard_x64';
            Lability_ProcessorCount = 2;
            Lability_StartupMemory = 2GB;
            Lability_MinimumMemory = 1GB;
            Lability_MaximumMemory = 3GB;
            Lability_BootOrder = 1;
        };
    );  
    NonNodeData = @{
        OrganisationName = 'Lab';
        Lability = @{
            EnvironmentPrefix = 'Lab-';
            DSCResource = @(
                @{ Name = 'xNetworking'; RequiredVersion = '3.2.0.0';}
                @{ Name = 'xPSDesiredStateConfiguration'; RequiredVersion = '6.0.0.0';}
                @{ Name = 'xActiveDirectory'; RequiredVersion = '2.16.0.0';}
                @{ Name = 'xAdcsDeployment'; RequiredVersion = '1.1.0.0';}
                @{ Name = 'xComputerManagement'; RequiredVersion = '1.9.0.0';}
            );
        }
    };
};  

Azure DSC Extension

Our Azure deployment uses the configuration and data file to configure the VM. The JSON for the DSC extension is shown below. Notice the following:

  1. The modulesUrl setting specifies a Zip file that contains the DSC resources and configuration ps1 file. We create these zip files as part of our build process and upload them to an Azure storage account.

  2. The configurationFunction setting specifies the name of the ps1 file to execute and the configuration within that we want to apply (a single file can contain more than one configuration, although ours don’t).

  3. We pass in the EnvPrefix variable and set the IsAzure value to 1 so our configuration executes the right code.

  4. The dataBlobUri within protectedSettings is our psd1 data file. The extension treats this as containing sensitive information – things held in this section are not displayed in any output from Azure Resource Manager.

In fairness, whilst at the moment we create JSON specific to each VM, I plan to refactor this to be common code that takes parameters rather than having an ARM template for each VM’s DSC.

{
    "name": "[concat(parameters('envPrefix'),parameters('vmName'),'/',parameters('envPrefix'),parameters('vmName'),'dsc')]",
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "location": "[parameters('VirtualNetwork').Location]",
    "apiVersion": "[parameters('ApiVersion').VirtualMachine]",
    "dependsOn": [],
    "tags": {
        "displayName": "DomainController"
    },
    "properties": {
        "publisher": "Microsoft.Powershell",
        "type": "DSC",
        "typeHandlerVersion": "2.1",
        "autoUpgradeMinorVersion": true,
        "settings": {         
            "modulesUrl": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'),'/',parameters('envConfig'),'.zip', parameters('artifactsSasToken'))]",
            "configurationFunction": "DomainController.ps1\\DomainController",
            "properties": {
                "EnvPrefix": "[parameters('EnvPrefix')]",
                "Credential": {
                    "userName": "[parameters('adminUsername')]",
                    "password": "PrivateSettingsRef:adminPassword"
                },
                "IsAzure": 1
            }
        },
        "protectedSettings": {
            "dataBlobUri": "[concat(parameters('artifactsLocation'), '/Environments/', parameters('envConfig'), '/', parameters('envConfig'),'.psd1', parameters('artifactsSasToken'))]",
            "Items": {
                "adminPassword": "[parameters('adminPassword')]"
            }
        }
    }
}  

We don’t include the DSC extension within the ARM template that deploys the VM; keeping it separate lets us sequence the deployment of configuration to deal with dependencies between servers.

Azure ARM Templates

The approach we take to deploying VMs in Azure has been consistent for some time now. My ResourceTemplates Repo in GitHub uses nested templates to deploy a three-server environment and we use exactly the same approach here. Our ‘master template’ is stored in the environment folder and it calls nested deploys for each VM, VM DSC extension and supporting stuff such as virtual networks. The VM and DSC templates are stored in the VM folder with the DSC config, and the supporting templates are in our Modules\Templates folder since they are shared.
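
For reference, kicking off such a master template from PowerShell looks something like this sketch (AzureRM module; the resource group, URIs and SAS token are placeholders that echo the artifact parameters in the DSC extension JSON above):

# Deploy an environment's master template. The nested template URIs are
# resolved by the Azure fabric at deploy time, hence the storage account
# location authorised by a SAS token.
New-AzureRmResourceGroupDeployment -ResourceGroupName 'MyEnv1-rg' `
    -TemplateUri 'https://labartifacts.blob.core.windows.net/environments/MyEnv1/azuredeploy.json' `
    -TemplateParameterObject @{
        envPrefix         = 'MyEnv1'
        artifactsLocation = 'https://labartifacts.blob.core.windows.net/environments'
        artifactsSasToken = $sasToken  # e.g. from New-AzureStorageContainerSASToken
    }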

Conclusion

This has been a very long article without a great deal of our actual code in it. I hope it explains how we approach our environment definition and deployment. I plan to do more posts that document more specific elements of a configuration or an environment.

Ultimately, I’m not sure that the goal of a single definition that covers multiple platforms and both host and guest configurations exists. However, I think we’ve got pretty close with our solution and it has minimal rework involved, particularly once you have built up a good library of VM configs that you can combine into an environment.

I should also point out that we are not installing apps – we are deploying a platform onto which our developers and testers can then install the applications they develop. This means that we keep the environments quite generic. Deployment of apps is still scripted (and probably uses VSTS Release Management) but is not included in the configurations we build. Having said that, there is nothing stopping a team extending the DSC to deploy their applications and thus build a more bespoke definition.

I’ve spoken to quite a few people about what we’ve done over the past few weeks and, certainly within the Microsoft space, many people want to do what we have done, but few were aware that tooling such as Lability and DSC was available to get it done. I hope this goes some way to plugging that gap.