Wednesday, March 15, 2017

Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 9 Managed Disks

It’s been a while since I’ve published an article in my series on automated deployments of RDS on Azure IaaS, but here is part 9! In case you’ve missed the previous 8 articles, here they are.

1. Full HA RDS 2016 deployment in Azure IaaS in < 30 minutes, Azure Resource Manager
2. RDS on Azure IaaS using ARM & JSON part 2 – demo at Microsoft Ignite!
3. Video of Ignite session showing RDS on Azure IaaS deployment using ARM/JSON
4. Windows Server 2016 GA available in Azure! – used it to deploy RDS on Azure IaaS!
5. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 5
6. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 6 RD Gateway
7. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 7 RD Web Access customization
8. Azure Resource Manager and JSON templates to deploy RDS in Azure IaaS – Part 8 Defender & BGinfo

This part 9 is all about Azure Managed Disks. Azure Managed Disks greatly simplify disk management for Azure IaaS VMs: the storage accounts associated with the VM disks are managed for you. You only have to specify the type (Premium or Standard) and the size of disk you need, and Azure creates and manages the disk for you.

Before we dive into improving the JSON template with Managed Disks, let's briefly touch on some of the most important advantages of Managed Disks (source):

Simple and scalable VM deployment
Managed Disks handles storage for you behind the scenes. Previously, you had to create storage accounts to hold the disks (VHD files) for your Azure VMs. When scaling up, you had to make sure you created additional storage accounts so you didn’t exceed the IOPS limit for storage with any of your disks. With Managed Disks handling storage, you are no longer limited by the storage account limits (such as 20,000 IOPS / account). You also no longer have to copy your custom images (VHD files) to multiple storage accounts. You can manage them in a central location – one storage account per Azure region – and use them to create hundreds of VMs in a subscription.
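
To give an idea of how little you now have to specify, below is a minimal sketch of a standalone managed data disk declared directly as an ARM resource. This is purely illustrative and not part of the RDS template itself; the disk name and size are hypothetical, and it uses the same preview API version that is introduced later in this article.

{
  "type": "Microsoft.Compute/disks",
  "name": "example-data-disk",
  "apiVersion": "2016-04-30-preview",
  "location": "[resourceGroup().location]",
  "comments": "Hypothetical standalone managed data disk: only the account type and size are specified, Azure handles the underlying storage",
  "properties": {
    "creationData": {
      "createOption": "Empty"
    },
    "accountType": "Premium_LRS",
    "diskSizeGB": 128
  }
}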

Better reliability for Availability Sets
Managed Disks provides better reliability for Availability Sets by ensuring that the disks of VMs in an Availability Set are sufficiently isolated from each other to avoid single points of failure. It does this by automatically placing the disks in different storage scale units (stamps). If a stamp fails due to hardware or software failure, only the VM instances with disks on those stamps fail.

Granular access control
You can use Azure Role-Based Access Control (RBAC) to assign specific permissions for a managed disk to one or more users. Managed Disks exposes a variety of operations, including read, write (create/update), delete, and retrieving a shared access signature (SAS) URI for the disk.

Images
Managed Disks also support creating a managed custom image. You can create an image from your custom VHD in a storage account or directly from a generalized (sys-prepped) VM. This captures in a single image all managed disks associated with a VM, including both the OS and data disks.
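
This article uses the first option (an image created from a custom VHD), but as a quick sketch the second option looks roughly like this: an Image resource that references an already generalized VM by its resourceId. The VM name below is hypothetical.

{
  "type": "Microsoft.Compute/images",
  "name": "example-image-from-vm",
  "apiVersion": "2016-04-30-preview",
  "location": "[resourceGroup().location]",
  "comments": "Hypothetical example: captures all managed disks of a generalized (sysprepped) VM into a single image",
  "properties": {
    "sourceVirtualMachine": {
      "id": "[resourceId('Microsoft.Compute/virtualMachines', 'myGeneralizedVM')]"
    }
  }
}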

For more information, also see: Azure Managed Disks Overview. I also found this article a good read: Azure Managed Disks Deep Dive, Lessons Learned and Benefits.

Now that we're familiar with the concept of Azure Managed Disks, let's see how we can leverage all of this in our ARM template. If you've seen the previous articles in this series, you'll know that the ARM template already had an Availability Set for each RDS role, housing at least 2 machines with a load balancer in front. The concept of Availability Sets stays the same when moving to Azure Managed Disks, but we do need to tell ARM that we are housing VMs with Managed Disks in order to take full advantage.

Previously, the declaration of our Availability Sets looked like the example below. This example is for the RD Gateway / RD Web Access servers, but note that we declared a separate Availability Set per server role in our previous templates.

{
  "apiVersion": "[variables('apiVersion')]",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[parameters('availabilitySetNameRDGW')]",
  "comments": "This resources creates an availability set that is used to make the RDGW Server Highly Available",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "RDGW AvailabilitySet",
    "Project": "[parameters('projectTag')]"
  }
},

To tell ARM we want to create an Availability Set that can house Virtual Machines based on Managed Disks, we need to make a few modifications.
{
  "apiVersion": "[variables('apiVersionPreview')]",
  "type": "Microsoft.Compute/availabilitySets",
  "name": "[parameters('availabilitySetNameRDGW')]",
  "comments": "This resources creates an availability set that is used to make the RDGW Server Highly Available",
  "location": "[resourceGroup().location]",
  "tags": {
    "displayName": "RDGW AvailabilitySet",
    "Project": "[parameters('projectTag')]"
  },
  "properties": {
    "platformUpdateDomainCount": 2,
    "platformFaultDomainCount": 2
  },
  "sku": {
    "name": "[variables('sku')]"
  }
},

As you can see, a new API version is needed to be able to use Managed Disks. At this point the version needs to be "2016-04-30-preview", and we've declared it using the following variable.

"apiVersionPreview": "2016-04-30-preview",

Next, we need to specify that this Availability Set will house Virtual Machines based on Managed Disks. To do this, we provide sku.name with the value "Aligned", which is declared as a variable:

"sku": "Aligned"

And finally, we can further define the Availability Set by specifying an Update Domain Count and a Fault Domain Count. What is the concept behind these properties? Here is what Microsoft says about them (source):

Update Domains
For a given availability set, five non-user-configurable update domains are assigned by default (Resource Manager deployments can then be increased to provide up to 20 update domains) to indicate groups of virtual machines and underlying physical hardware that can be rebooted at the same time. When more than five virtual machines are configured within a single availability set, the sixth virtual machine is placed into the same update domain as the first virtual machine, the seventh in the same update domain as the second virtual machine, and so on.

Fault Domain
Fault domains define the group of virtual machines that share a common power source and network switch. By default, the virtual machines configured within your availability set are separated across up to three fault domains for Resource Manager deployments (two fault domains for Classic). While placing your virtual machines into an availability set does not protect your application from operating system or application-specific failures, it does limit the impact of potential physical hardware failures, network outages, or power interruptions.

Since this deployment places each RDS role on 2 Virtual Machines, we configured 2 Update Domains and 2 Fault Domains. When setting these properties, consider the number of VMs you are hosting and the maximum limit of each of these properties.

Now that we have covered the changes needed when specifying the Availability Sets, let’s take a look at the changes needed for defining Virtual Machines that we want to add to these Availability Sets.

Again, the first change that is needed is the API version. Similar to the Availability Set, the VM needs to be configured with API version "2016-04-30-preview".

  "apiVersion": "[variables('apiVersionPreview')]",
  "type": "Microsoft.Compute/virtualMachines",
  "name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1))]",
        "comments": "This resources creates VM’s that will host the RDGW/RDWA role",

Next, we need to change the declaration of the osDisk within the Virtual Machines. Before using Managed Disks, we had declared this as follows:

"osDisk": {
  "name": "[variables('virtualmachineosdisk').diskName]",
  "vhd": {
  "uri": "[concat('http://',variables('storageAccount').name,copyindex(1),'.blob.core.windows.net/vhds/',parameters('hostNamePrefixRDGW'),'0',copyindex(1),'-',variables('virtualmachineosdisk').diskName,'.vhd')]"
  },
  "caching": "[variables('virtualmachineosdisk').cacheOption]",
  "createOption": "[variables('virtualmachineosdisk').createOption]"
}
},

To tell ARM that this Virtual Machine needs to use Managed Disks, we change the above to the following:
"osDisk": {
  "name": "[concat(parameters('hostNamePrefixRDGW'),'0', copyindex(1),'-',variables('virtualmachineosdisk').diskName)]",
  "managedDisk": {
    "storageAccountType": "[variables('storage').type]"
  },
  "caching": "[variables('virtualmachineosdisk').cacheOption]",
  "createOption": "[variables('virtualmachineosdisk').createOption]"
},

So, instead of defining the storage account and the path to the .vhd that needs to be used, we simply introduce the managedDisk property and specify the Storage Account Type.

In our Variables Section, we've declared this variable as follows:
"storage": {
  "name": "[concat(uniquestring(resourceGroup().id), 'rdsarm')]",
  "type": "Premium_LRS"
},
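
The osDisk declarations above also reference a virtualmachineosdisk variable. Its exact contents are not shown in this article, but based on the properties being referenced it would look roughly like the sketch below; the values here are assumptions and may differ from the actual template.

"virtualmachineosdisk": {
  "diskName": "osdisk",
  "cacheOption": "ReadWrite",
  "createOption": "FromImage"
},
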
The configuration above is applicable to Virtual Machines that we want to base on a standard image, in this case Windows Server 2016 Datacenter. You might recall from a previous article in this series that we've been using a Custom Template Image for the RD Session Host role, which allows us to specify a custom image that contains our corporate applications. So, how does that all change with Managed Disks?

You might recall that previously (before Managed Disks) one of the prerequisites of our ARM script was a pre-existing Storage Account containing the Template Image for the RDSH servers. Using the parameter existingCustomImageRDSH, we provided the option to specify the location of the RDSH Custom Template Image.

"existingCustomImageRDSH": {
  "value": "https://tuie2b2tyw23yrdsrdsh1.blob.core.windows.net/...
},

Since we're now using Managed Disks, there is no longer a need for a Storage Account housing the RDSH Template Image. To still allow us to specify a Custom Template Image, we create a new resource in Azure called an Image.

To move the existing RDSH Template Image from the Storage Account to an Image resource that we can use when creating a VM with a Managed Disk, I used the ARM template below. It creates a new Image resource from the VHD on the existing storage account.

{
  "type": "Microsoft.Compute/images",
  "name": "RDSH-RDSG-Template-Image",
  "apiVersion": "2016-04-30-preview",
  "location": "[resourceGroup().location]",
  "properties": {
    "storageProfile": {
      "osDisk": {
        "osType": "windows",
        "osState": "Generalized",
        "blobUri": "https://tuie2b2tyw23yrdsrdsh1.blob.core.windows.net/vhds/RDSH2016OSDisk.vhd",
        "caching": "ReadWrite",
        "storageAccountType": "Premium_LRS"
      }
    }
  }
}

The result is a new resource of type Image. Since we no longer need the .VHD inside the storage account, we can completely remove that Storage Account. The Image is now the only prerequisite, which makes our lives much easier!

With the Image in place, let's take a look at how we can tell ARM to create a new Virtual Machine based on Managed Disks and on the Image resource shown above.

Previously (before Managed Disks), we declared the storage profile of the RDSH servers as follows:
"storageProfile": {
  "osDisk": {
    "name": "[variables('virtualmachineosdisk').diskName]",
    "vhd": {
      "uri": "[concat('http://',parameters('existingStorageAccountNameRDSH'),'.blob.core.windows.net/vhds/',parameters('hostNamePrefixRDSH'),'0',copyindex(1),'-',variables('virtualmachineosdisk').diskName,'.vhd')]"
    },
    "osType": "windows",
    "caching": "[variables('virtualmachineosdisk').cacheOption]",
    "createOption": "[variables('virtualmachineosdisk').createOption]",
    "image": {
      "uri": "[parameters('existingCustomImageRDSH')]"
    }
  }
},

With Managed Disks, below is what needs to change. We no longer have to define which storage account the VHD resides on; we can simply specify the resourceId of the Image that we created in the previous step. Also note that this removes the need for the Template Image to be in the same Storage Account as the Virtual Machine being created, a huge improvement!
"storageProfile": {
  "osDisk": {
   "name": "[concat(parameters('hostNamePrefixRDSH'),'0', copyindex(1),'-',variables('virtualmachineosdisk').diskName)]",
   "managedDisk": {
     "storageAccountType": "[variables('storage').type]"
   },
   "osType": "windows",
   "caching": "[variables('virtualmachineosdisk').cacheOption]",
   "createOption": "[variables('virtualmachineosdisk').createOption]"
  },
  "imageReference": {
    "id": "[resourceId('Microsoft.Compute/images', parameters('existingCustomImageNameRDSH'))]"
  },
},
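
Because the imageReference now points to the Image resource by name rather than to a blob URI, the old existingCustomImageRDSH parameter shown earlier can be replaced by a parameter that simply holds the name of the Image resource. A sketch of what that parameter value could look like for the image created above, assuming the Image resource lives in the same resource group as the deployment:

"existingCustomImageNameRDSH": {
  "value": "RDSH-RDSG-Template-Image"
},
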
The final step we need to perform is to remove the creation of the Storage Accounts inside our ARM template, because we no longer need those.

The end result after running the ARM template is 3 Availability Sets configured to house Virtual Machines with Managed Disks, 6 Managed Disk resources, and 1 Image resource that serves as our template image for the RDSH servers.


This concludes our journey of moving from a Storage Account model towards Azure Managed Disks. In my opinion, it is a great new feature for Azure IaaS that provides a lot more flexibility!

Next up in this series on ARM templates for Azure IaaS: grouping variables into complex objects, using a VNet & subnet from a separate resource group, resource comments, and project tags!

Stay tuned!