Building an Azure IaaS and on-premise hybrid environment Part 2: DC and servers in the cloud

Posted by Rik Hepworth on Monday, November 4, 2013

This series includes the following posts:

  1. Building an Azure IaaS and on-premise hybrid environment Part 1: The plan and Azure Network Connection
  2. Building an Azure IaaS and on-premise hybrid environment Part 2: DC and servers in the cloud

This is part 2 of a series of posts about building a hybrid network connecting Windows Azure and on-premise. For more background on what the goals are, and for information on how to create the Azure Network and connect the VPN tunnel between on-premise and cloud, see part 1.

Creating a DC on our Azure Network

I’m going to create a new VM on Azure using the VM gallery. One important point when doing this for a domain controller is that you should add a second drive to the VM. This is down to how read/write caching works on the primary drive (it’s enabled), which means there is a risk that a write operation makes it to the cache but not to the drive in the event of a failure. That would cause problems with AD synchronisation, so we add a second drive with caching disabled and use it to host the AD database.

Before we create the new machine it’s a good idea to create a storage account. If we leave Azure to do it the account gets the usual random name. I prefer order and convention in these things, so I’ll create one myself.
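If you prefer to script it, here’s a minimal sketch using the classic Azure PowerShell cmdlets; the account name, location and subscription name are placeholders rather than values from my setup:

```powershell
# Create a storage account with a name of our choosing (classic/service management).
New-AzureStorageAccount -StorageAccountName 'azureucstorage' -Location 'West Europe'

# Make it the default for the subscription so new VM disks land in its vhds container.
Set-AzureSubscription -SubscriptionName 'MySubscription' `
    -CurrentStorageAccountName 'azureucstorage'
```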

[Screenshot: storage 1]

When you create a storage account, Azure now creates a container within it named vhds and it uses that to hold the virtual hard disks for your VMs.

We can now create a virtual machine using the VM Gallery.

[Screenshot: new vm 1]

The Virtual Machine creation wizard will appear and show the numerous VM templates we can start from. I want a Server 2012 R2 DC so I’m going to choose Windows Server 2012 R2 Datacenter from the list.

[Screenshot: new vm 2]

The next screen allows us to set the VM name. This is also used for the Azure Endpoint and must be unique within Azure. We can also choose a size for the VM from the available Azure VMs. This is a lab so I’m happy with a small VM. In production you would size the VM according to your AD.

We also need to provide a username and password that Azure will configure when it deploys the VM. We’ll use that to connect to the machine in order to join it to the domain.
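These wizard steps can also be sketched in PowerShell. The image lookup, instance size and password below are assumptions; the VM name and username match the ones I use later in this post:

```powershell
# Find a Windows Server 2012 R2 Datacenter image in the gallery.
$image = Get-AzureVMImage |
    Where-Object Label -like 'Windows Server 2012 R2 Datacenter*' |
    Select-Object -Last 1

# Build the VM configuration: a small instance, with the admin credentials
# that Azure will configure when it deploys the VM.
$vm = New-AzureVMConfig -Name 'azureucdc' -InstanceSize Small -ImageName $image.ImageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername 'builduser' -Password '<password>'
```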

[Screenshot: new vm 3]

The next screen asks for a whole bunch of information about where the new VM will be placed and what networks it will be connected to. The wizard does a pretty good job of selecting the right defaults for most settings.

I created two subnets in my virtual network so I could have internal and external subnets. The DC shouldn’t have connections from outside our network, so it’s going on subnet-1.
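Continuing the PowerShell sketch, placing the VM on subnet-1 and creating it inside the virtual network from part 1 (the vnet name and service name here are assumptions):

```powershell
# Put the DC on the internal subnet.
$vm = $vm | Set-AzureSubnet -SubnetNames 'subnet-1'

# Create the VM in the virtual network (the cloud service is assumed to exist already).
New-AzureVM -ServiceName 'azureucdc' -VNetName 'azure-vnet' -VMs $vm
```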

[Screenshot: new vm 4]

The final screen allows us to configure the ports that will be available through the Azure endpoints. If we remove these then we will only be able to connect to the new VM via our internal network. That’s exactly what I want, so I will click the big X at the right hand side of each endpoint to remove it.
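Scripted, the equivalent is to strip the default endpoints from the VM afterwards; ‘RemoteDesktop’ and ‘PowerShell’ are the names Azure gives them by default. (Alternatively, Add-AzureProvisioningConfig accepts -NoRDPEndpoint and -NoWinRMEndpoint switches so they are never created in the first place.)

```powershell
# Remove the default RDP and WinRM endpoints and push the change to Azure.
Get-AzureVM -ServiceName 'azureucdc' -Name 'azureucdc' |
    Remove-AzureEndpoint -Name 'RemoteDesktop' |
    Remove-AzureEndpoint -Name 'PowerShell' |
    Update-AzureVM
```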

[Screenshot: new vm 5]

When we click the final button Azure will show us that our new VM is provisioning.

[Screenshot: new vm 6]

Once the VM is running you can click on it to view the dashboard. You will see from mine that the new VM has no public IP address and that it has been given an internal IP address of 172.16.1.4 – on the Azure network I created earlier. The first server that you connect to a virtual network subnet in Azure will always get .4 as its address; the second gets .5, and so on. An important point to note here is that if a virtual machine is deallocated (shutting it down from the Azure portal will do this) the DHCP-assigned IP address is released and another server could pick it up. For this reason it’s important to be careful about the order in which you start machines.
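If you want to check the internal address from PowerShell rather than the dashboard, something like this should do it:

```powershell
# The internal (DIP) address Azure handed the VM via DHCP.
(Get-AzureVM -ServiceName 'azureucdc' -Name 'azureucdc').IpAddress
```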

[Screenshot: vm dash]

I haven’t added a second hard disk to the VM, so that’s our next step. At the bottom of the dashboard there is an Attach button that allows us to add an empty disk to the VM.

[Screenshot: attach disk 1]

In the screen that appears we can give our new disk a name and size and, importantly, set the type of caching we want on the disk. As I mentioned, everything I have read and heard tells me that caching on the disk holding the AD database should be turned off.
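As a sketch, here’s the same attach operation from PowerShell with host caching off; the size and label are arbitrary choices for this lab:

```powershell
# Attach an empty data disk with no host caching and apply the change.
Get-AzureVM -ServiceName 'azureucdc' -Name 'azureucdc' |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel 'ADData' -LUN 0 -HostCaching None |
    Update-AzureVM
```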

[Screenshot: new disk 1]

Now we’ve got the second disk attached, the next step is to make an RDP connection to our new server. We can do that from one of the machines on our on-premise network just by entering the IP address of the Azure-hosted server into the Remote Desktop Connection dialog.

Remember to use the credentials you set when you created the VM: e.g. azureucdc\builduser
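To launch the client from a run prompt or PowerShell session on the on-premise machine:

```powershell
# Open the Remote Desktop client against the VM's internal IP.
mstsc /v:172.16.1.4
```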

[Screenshot: rdp connection 1]

The first thing we need to do is bring the additional disk online, create a volume and assign a drive letter. I’ve used S for sysvol.
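On Server 2012 R2 this can be done in one pipeline; a sketch, assuming the new data disk is the only uninitialised one:

```powershell
# Initialise the raw data disk, create a partition as S: and format it.
Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter S -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'AD Data' -Confirm:$false
```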

[Screenshot: dc add disk]

Next, we need to join the server to our AD domain, which will need a reboot. After that we can add the Active Directory Domain Services role in order to promote the server to be a domain controller. It’s important when doing this to set the paths for the AD database, logs and SYSVOL to the second drive (S: in my case).
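Here’s a sketch of those steps from an elevated PowerShell session on the new VM; the domain name is a placeholder for the on-premise domain from part 1:

```powershell
# Join the server to the domain (prompts for domain credentials) and reboot.
Add-Computer -DomainName 'corp.example.com' -Credential (Get-Credential) -Restart

# After the reboot: install the role, then promote, pointing the AD paths at S:.
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
Install-ADDSDomainController -DomainName 'corp.example.com' `
    -DatabasePath 'S:\NTDS' -LogPath 'S:\NTDS' -SysvolPath 'S:\SYSVOL' `
    -InstallDns -Credential (Get-Credential)
```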

[Screenshot: azure dcpromo]

Once we’ve got our new DC and DNS up and running, we should configure our Azure network so it knows the IP address of our new DNS and hands it to other servers in our network.

To do that we register the DNS with Azure first.

[Screenshot: azure dns 2]

Next we modify the configuration of our Azure virtual network to add the new DNS. DNS addresses are handed out in the order they are specified in the Azure network, so I’ve removed the on-premise DNS, then re-added the Azure-hosted one first and the on-premise one second.
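The same change can be made by exporting the network configuration, editing it and re-importing it; a sketch (the on-premise DNS name and address are placeholders):

```powershell
# Export the current virtual network configuration to a file.
Get-AzureVNetConfig -ExportToFile 'C:\temp\netcfg.xml'

# In netcfg.xml, order the <DnsServers> with the Azure-hosted DNS first, e.g.:
#   <DnsServers>
#     <DnsServer name="azureucdc" IPAddress="172.16.1.4" />
#     <DnsServer name="onpremdc"  IPAddress="192.168.1.4" />
#   </DnsServers>

# Re-import the edited configuration.
Set-AzureVNetConfig -ConfigurationPath 'C:\temp\netcfg.xml'
```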

[Screenshot: azure dns 3]

We now have a functioning Azure network with services that will support any other machines we host there even if the VPN link goes down.

We’ll need some more VMs for our other services to support our connected Azure ADFS. We’ll deal with those in part 3.