Deploying an Azure Container App Environment within a virtual network using bicep

Posted by Rik Hepworth on Friday, May 17, 2024

When working on a project recently I needed to deploy a Container App Environment within a virtual network in Azure. Thanks to the joys of internet search, I started off reading the wrong parts of the official documentation and got thoroughly confused, and much of the community content on the subject uses out-of-date schemas and code. This article exists so I don’t need to go through that again, and hopefully it will help others, too.

What’s the problem?

Pretty much every organisation I work with these days stipulates that all their cloud apps must be deployed within a virtual network. Azure services meet this need in several different ways: some deploy within a subnet, provisioning resources directly; others use private endpoints, providing a secure connection into the network over which we can communicate with the service instance.

Container Apps has been a rapidly evolving service, and because of that it has two ways of deploying within a virtual network. From a distance they look very similar, but they behave differently, and the way you get the service to use one method or another is really not obvious.

A quick note about the way we don’t want to do this

If you hit the wrong page of documentation first, as I did, things feel pretty straightforward. To connect your Container Apps Environment to the virtual network you need to create a dedicated subnet within it. That subnet is used to connect both the service components and the containers themselves, so a /23 address space is needed.

You then add a few simple properties to get the service deploying within the virtual network:

    vnetConfiguration: {
      internal: true
      infrastructureSubnetId: vNetSubnet.id
    }

The internal property tells the service we want to connect to a virtual network, and the infrastructureSubnetId takes the resource Id of the subnet we want to deploy into.

That sounds great, right? The trouble is that when the service deploys, it also creates a resource group for some of its components, and the name of that resource group is autogenerated. That means it won’t meet the naming policies in your tenant, if there are any (and who doesn’t have policies for that, these days?). I would try to deploy and either get blocked by policy, or somebody would spot the resource group and delete it without my knowledge.

I looked at the API documentation for Microsoft.App/managedEnvironments and found the infrastructureResourceGroup property, but no matter what I specified the deployment would ignore it.
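For reference, this is roughly what I was trying at the time (names here are illustrative, and the snippet assumes a subnet reference like the vNetSubnet shown later); without workload profiles the property is silently ignored:

```bicep
// Old (consumption-only) network model: infrastructureResourceGroup has no effect
resource ContainerAppEnvironment 'Microsoft.App/managedEnvironments@2023-08-01-preview' = {
  name: 'cae-myproject-dev'
  location: location
  properties: {
    infrastructureResourceGroup: 'rg-dev-myproject-caeinfra' // ignored here
    vnetConfiguration: {
      internal: true
      infrastructureSubnetId: vNetSubnet.id
    }
  }
}
```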

An example for how we do want to do this

It turned out that there is a second way to deploy within a virtual network. That way was added later by the team, and the documentation is quite detailed, but it doesn’t really make clear how you make the service deploy in the new way as opposed to the old.

The answer turns out to be both simple and not obvious. When deploying in a virtual network we can optionally define one or more workload profiles - these allow us to specify dedicated compute as well as consumption.

    workloadProfiles: [
      {
        name: 'Consumption'
        workloadProfileType: 'Consumption'
      }
    ]
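For completeness, a dedicated compute profile can sit alongside the Consumption one. The profile name, size, and scale counts below are illustrative:

```bicep
workloadProfiles: [
  {
    name: 'Consumption'
    workloadProfileType: 'Consumption'
  }
  {
    // Dedicated compute: a named profile with a fixed VM size and a scale range
    name: 'dedicated-d4'
    workloadProfileType: 'D4'
    minimumCount: 1
    maximumCount: 3
  }
]
```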

If you add any workload profiles at all, the service switches from the old way of deploying to a virtual network to the newer, better way. I hadn’t done this because I wanted to use consumption compute and the documentation told me this was the default, so I didn’t think I needed to specify anything.

Adding a Consumption workload profile suddenly changed how things deployed. This meant:

  • The resource group name I was specifying in infrastructureResourceGroup was now honoured.
  • The service deploys differently, and as a result containers no longer consume IP addresses from the dedicated subnet. This means we can go from the very large /23 to a much smaller /27 address space.
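As a sketch, the smaller subnet for the workload profiles model might look like this (the address range and names are illustrative, and vnetName is assumed to be defined as in the variables later in this post). Note that in this model the documentation requires the subnet to be delegated to Microsoft.App/environments:

```bicep
// Sketch: a /27 subnet for a workload profiles environment
resource vNet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = {
  name: vnetName
}

resource vNetSubnet 'Microsoft.Network/virtualNetworks/subnets@2023-09-01' = {
  parent: vNet
  name: 'ContainerApps'
  properties: {
    addressPrefix: '10.0.2.0/27'
    delegations: [
      {
        name: 'containerapps-delegation'
        properties: {
          serviceName: 'Microsoft.App/environments'
        }
      }
    ]
  }
}
```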

Deploying the environment

I use separate bicep modules for different resources. This means that I create my virtual network and subnets in one module, a Log Analytics workspace for telemetry in another, and my container app environment in a third.

In the container app environment module I reference the existing virtual network and log analytics resources so I can connect them to my new container app environment.
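The orchestrating template then wires those modules together, roughly like this (the module file names and parameters are illustrative, not the exact ones from my project):

```bicep
// main.bicep: compose the modules
module network 'modules/network.bicep' = {
  name: 'network'
  params: {
    projectName: projectName
    environment: environment
    location: location
  }
}

module logAnalytics 'modules/loganalytics.bicep' = {
  name: 'logAnalytics'
  params: {
    projectName: projectName
    environment: environment
    location: location
  }
}

module containerAppEnvironment 'modules/containerappenvironment.bicep' = {
  name: 'containerAppEnvironment'
  params: {
    projectName: projectName
    environment: environment
    location: location
  }
  dependsOn: [
    network
    logAnalytics
  ]
}
```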

I use parameters to compose my resource names, and I use the same parameters in tags.
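The name variables in the next snippet assume parameters along these lines (a sketch; the allowed values and defaults are examples):

```bicep
@description('Short project name used when composing resource names')
param projectName string

@allowed([
  'dev'
  'test'
  'prod'
])
param environment string

param location string = resourceGroup().location
param owner string
param connectToVnet bool = true
```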

Tip: The hidden-title tag is used by the portal and is great for putting human-readable names on your resources without compromising your resource naming policies. The content of the tag is displayed in parentheses after the resource name in the portal.

Tip: In my opinion, the bicep is source code that is used to generate a compiled artefact - the ARM template. There is a field in the ARM template named contentVersion. We stamp that with the build version of our artefact, and the bicep reference deployment().properties.template.contentVersion lets us surface that value in a tag, so the portal shows which version of my code was used to deploy the infrastructure.

// ** Variables **
// ***************

var ContainerAppEnvironmentName = toLower('cae-${projectName}-${environment}-${location}')
var LogAnalyticsWorkspaceName = toLower('log-${projectName}-${environment}-${location}')
var vnetName = toLower('vnet-${projectName}-${environment}-${location}')
var InfrastructureResourceGroupName = toLower('rg-${environment}-${projectName}-caeinfra')

var subnetName = 'ContainerApps'

I need to reference the virtual network in order to then reference the subnet.

// Reference existing Log Analytics Workspace
resource LogAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2023-09-01' existing = {
  name: LogAnalyticsWorkspaceName
}

// Reference existing VNet
resource vNet 'Microsoft.Network/virtualNetworks@2023-09-01' existing = if (connectToVnet) {
  name: vnetName
}

// Reference existing Subnet
resource vNetSubnet 'Microsoft.Network/virtualNetworks/subnets@2023-09-01' existing = if (connectToVnet) {
  parent: vNet
  name: subnetName
}

The bicep for the Container App Environment references the log analytics workspace and uses the listKeys function to specify the access key directly.

The workload profile section tells the service that I want to use the newer way of connecting to my virtual network, and the infrastructureResourceGroup can now follow our organisational naming convention.

// Deploy Container App Environment
resource ContainerAppEnvironment 'Microsoft.App/managedEnvironments@2023-08-01-preview' = {
  name: ContainerAppEnvironmentName
  location: location
  tags: {
    project: projectName
    ApplicationName: projectName
    environment: environment
    owner: owner
    displayName: 'Container App Environment'
    'hidden-title': 'Container App Environment'
    version: deployment().properties.template.contentVersion
  }
  properties: {
    appLogsConfiguration: {
      destination: 'log-analytics'
      logAnalyticsConfiguration: {
        customerId: LogAnalyticsWorkspace.properties.customerId
        sharedKey: LogAnalyticsWorkspace.listKeys().primarySharedKey
      }
    }
    workloadProfiles: [
      {
        name: 'Consumption'
        workloadProfileType: 'Consumption'
      }
    ]
    infrastructureResourceGroup: InfrastructureResourceGroupName
    vnetConfiguration: {
      internal: true
      infrastructureSubnetId: vNetSubnet.id
    }
    zoneRedundant: (environment == 'prod')
  }
}

Creating a private DNS zone for our container apps

Creating the environment is all well and good, but you won’t be able to actually connect to your containers unless you also create a private DNS zone. Once again, the service autogenerates a subdomain for your instance, so we need to reference the defaultDomain property.

// Create Private DNS Zone for website domain
resource PrivateDNSzone 'Microsoft.Network/privateDnsZones@2020-06-01' = {
  name: ContainerAppEnvironment.properties.defaultDomain
  location: 'global'
  tags: {
    project: projectName
    ApplicationName: projectName
    environment: environment
    owner: owner
    displayName: 'Private DNS Zone'
    'hidden-title': 'Private DNS Zone'
    version: deployment().properties.template.contentVersion
  }
}

Don’t forget to connect that private DNS zone to either your virtual network, or to your hub network if you’re using a hub-spoke architecture with a private DNS resolver.

// Connect private dns zone to virtual network
resource PrivateDNSzoneNetLink 'Microsoft.Network/privateDnsZones/virtualNetworkLinks@2020-06-01' = {
  name: 'link_to_${vnetName}'
  parent: PrivateDNSzone
  location: 'global'
  properties: {
    registrationEnabled: false
    virtualNetwork: {
      id: vNet.id
    }
  }
}

Then, finally, we need to create a DNS record so other services can resolve the names of our containers. The record points at the IP address of the load balancer within the Container App Environment using the staticIp property of the resource.

resource record 'Microsoft.Network/privateDnsZones/A@2020-06-01' = {
  parent: PrivateDNSzone
  name: '*'
  properties: {
    ttl: 3600
    aRecords: [
      {
        ipv4Address: ContainerAppEnvironment.properties.staticIp
      }
    ]
  }
}
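With the environment and DNS in place, container apps can be deployed into it by referencing the environment’s resource Id. This minimal sketch (the app name and image are illustrative, using Microsoft’s public hello-world sample) shows the wiring:

```bicep
// Sketch: a minimal container app running in the environment
resource helloApp 'Microsoft.App/containerApps@2023-08-01-preview' = {
  name: 'ca-hello'
  location: location
  properties: {
    managedEnvironmentId: ContainerAppEnvironment.id
    workloadProfileName: 'Consumption'
    configuration: {
      ingress: {
        external: true // exposed on the environment's (internal) ingress
        targetPort: 80
      }
    }
    template: {
      containers: [
        {
          name: 'hello'
          image: 'mcr.microsoft.com/azuredocs/containerapps-helloworld:latest'
          resources: {
            cpu: json('0.25')
            memory: '0.5Gi'
          }
        }
      ]
    }
  }
}
```

Because the environment is internal, the app’s FQDN resolves via the wildcard A record created above, so other services on the network can reach it by name.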