Viewing and increasing your Azure subscription limits and quotas

Azure contains a number of default subscription limits that should be considered as part of any Azure design phase. The majority of these limits are soft, and you can raise a support case to have them increased. To avoid delays to any project, I always recommend this is one of the first areas you look at, as there can be a lag between raising the request and it being actioned. The limits exist for a number of reasons, primarily to help Microsoft control and capacity-plan the environment whilst also keeping usage within bounds to protect the platform.

There is a very handy view in the Azure Portal, which you can reach by going to the new, preview “Cost Management + Billing” blade. From here go to “Subscriptions” and click the subscription you would like to view. You get some great statistics, e.g. spending rate, credit remaining, forecast, etc., all of which are valuable when planning and controlling your costs. If you navigate to “Usage and quotas” under the “Settings” menu you will see the following view:

[Screenshot: the “Usage and quotas” view for the subscription]

There are other ways to view your usage quotas via PowerShell; however, these are individual commands per resource. It wouldn’t take much to create a quick script that you can run regularly to expose this, although the portal view above deals with it nicely I think. One example command is “Get-AzureRmVmUsage -Location <location>”:

[Screenshot: output of Get-AzureRmVmUsage]
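If you did want to script it, here is a minimal sketch covering compute, network and storage quotas for a region. This assumes the AzureRM module; the network usage cmdlet in particular may depend on your module version:

    # Report current usage vs. quota for a region (AzureRM module).
    $location = "westeurope"

    # Compute quotas: cores, VMs, availability sets, etc.
    Get-AzureRmVMUsage -Location $location |
        Select-Object @{N="Quota";E={$_.Name.LocalizedValue}}, CurrentValue, Limit

    # Network quotas: public IPs, NICs, NSGs, etc.
    Get-AzureRmNetworkUsage -Location $location |
        Select-Object @{N="Quota";E={$_.Name.LocalizedValue}}, CurrentValue, Limit

    # Storage account quota (subscription-wide)
    Get-AzureRmStorageUsage |
        Select-Object Name, CurrentValue, Limit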

The bottom part of the page shows your current usage against your limits. It is now also very easy to “Request an Increase” – if you click this link you will be taken through a guided wizard to increase your limits.

[Screenshot: the quota increase request wizard]

Bear in mind this is resource- and type-specific, e.g. you will be asked to specify the deployment model you are using (Classic/ARM), the impact on your business so they can set a priority, the location you need the increase in, and also the SKU family.

[Screenshot: specifying the deployment model, impact, location and SKU family]

The final screen asks you to supply contact details and specify who should be notified throughout the lifetime of the ticket.

[Screenshot: support contact details]

…and there you go – limits raised! As noted, this can take some time, and you are asked to be pretty specific about what you are requesting, so I highly recommend you take time to plan your deployments correctly to avoid any frustration.


Azure Portal – Ability to Quickly Delete Resources

Very minor, and very finicky, but one thing that has always frustrated me in the Azure Portal is the inability to clear up my test subscriptions without having to resort to PowerShell. Occasionally I’ll spend some time spinning new services up directly in the portal rather than using PowerShell, and when I’ve finished I like to clear things down to avoid costs, which usually involves just deleting any resource groups I’ve spun up.

Now, the PowerShell to do this is pretty straightforward… you can just pipe Get-AzureRmResourceGroup into Remove-AzureRmResourceGroup with the -Force flag, or use a loop if you want more control, so why complain? Well, sometimes launching a script or PowerShell is just too much effort, which invariably means I’ll leave stuff running and then incur costs.
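For reference, a minimal sketch of that clean-up (AzureRM module). This deletes everything in the current subscription, so it is strictly for test subscriptions:

    # Remove every resource group in the current subscription.
    # -Force suppresses the per-group confirmation prompt.
    Login-AzureRmAccount

    Get-AzureRmResourceGroup | ForEach-Object {
        Write-Host "Deleting resource group '$($_.ResourceGroupName)'..."
        Remove-AzureRmResourceGroup -Name $_.ResourceGroupName -Force
    }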

I’m not sure when this feature change came about (either last week during Ignite, or earlier – or has it been there all along and I’ve missed it?!) – however, you can now select all your resources (rather than deleting them individually), type the word “delete” to confirm, and the portal will attempt to remove them. This doesn’t always work, as sometimes you’ll have nested resources that block the deletion, but it’s a quick way to delete resources if you don’t want to resort to PowerShell.

[Screenshot: multi-select delete in the Azure Portal]

A quick look at Instant File Restore in an Azure VM…

Instant File Restore directly from an Azure VM is a feature release that caught my eye recently. I remember being excited in the early days of on-premises virtualisation when backup vendors, e.g. Veeam, introduced the ability to mount VM backups and access the file system directly, allowing a quick restore – I always thought it was a handy feature. Granted, it does not (always) replace proper in-VM backups of the application or service, but it does provide a quick and easy way to restore a file, config, etc.

The feature went GA a couple of days ago; however, the portal still labels it as in-preview. More info can be found in the Azure blog post:

https://azure.microsoft.com/en-us/blog/instant-file-recovery-from-azure-vm-backups-is-now-generally-available/

To start with you need an Azure virtual machine and some files you’d like to protect. I’ve created a simple VM from the marketplace running Windows Server 2016 Datacenter, and created some test files on one of the disks:

[Screenshot: test files created on one of the VM’s disks]
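Incidentally, if you’d rather script the creation of a throwaway test VM like this one, here is a rough sketch using the simplified New-AzureRmVM parameter set (available in AzureRM 4.x and later; all names here are illustrative):

    # Spin up a quick test VM running Windows Server 2016 Datacenter.
    # The simplified parameter set creates the supporting resources
    # (VNet, NIC, public IP, NSG) with sensible defaults.
    $cred = Get-Credential   # local admin credentials for the new VM

    New-AzureRmVM -ResourceGroupName "rg-backuptest" `
                  -Name "azurebackuptest" `
                  -Location "westeurope" `
                  -ImageName "Win2016Datacenter" `
                  -Credential $cred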

You’ll then need to log into the Azure Portal and have a Recovery Services vault already configured, with the virtual machine you want to protect added to it. The following screenshot shows the virtual machine ‘azurebackuptest’ added to the vault:

[Screenshot: ‘azurebackuptest’ listed as a backup item in the Recovery Services vault]

If you do not have your machine added, use the ‘Add’ button and choose a backup policy. You then need to ensure you have a restore point from which you can recover – a PowerShell sketch for both steps follows.
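If you prefer PowerShell for this, here is a hedged sketch using the AzureRM.RecoveryServices cmdlets to enable protection and force an on-demand restore point (the vault, policy and resource names are illustrative):

    # Point the session at the Recovery Services vault
    $vault = Get-AzureRmRecoveryServicesVault -Name "rv-backuptest"
    Set-AzureRmRecoveryServicesVaultContext -Vault $vault

    # Enable protection for the VM using the vault's default policy
    $policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy"
    Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
        -Name "azurebackuptest" -ResourceGroupName "rg-backuptest"

    # Trigger an on-demand backup so a restore point exists
    $container = Get-AzureRmRecoveryServicesBackupContainer `
        -ContainerType AzureVM -Status Registered -FriendlyName "azurebackuptest"
    $item = Get-AzureRmRecoveryServicesBackupItem `
        -Container $container -WorkloadType AzureVM
    Backup-AzureRmRecoveryServicesBackupItem -Item $item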

I’m now going to go back to my virtual machine and permanently delete some files (so they’re gone from the recycle bin, too), as you can see from the following screenshot:

[Screenshot: files permanently deleted from the VM]

We now have a virtual machine with missing files that we’d ordinarily need to recover via an in-VM backup agent – however, we’ll use the File Recovery tools instead. Navigate to the vault again and choose ‘File Recovery (Preview)’:

[Screenshot: the ‘File Recovery (Preview)’ option in the vault]

From here you need to select your recovery point and download the executable – it can take a minute or two to generate. Once you’ve downloaded the .exe, the ‘Unmount Disks’ option will become available:

[Screenshot: selecting a recovery point and downloading the executable]

Now simply run the .exe file on the machine where you’d like the disks to be mounted so you can copy the files off. The exe will launch PowerShell and mount the disks as iSCSI targets (disconnecting any previous connections if you have any):

[Screenshot: the recovery script mounting the disks as iSCSI targets]

You can now browse the mounted disk and recover the files that were deleted earlier:

[Screenshot: browsing the mounted disk to recover the deleted files]

Once complete, remember to kill the PowerShell session as indicated in the previous screenshot, and don’t forget to click ‘Unmount Disks’ in the Azure Portal:

[Screenshot: the ‘Unmount Disks’ option]

That’s it! A straightforward feature, but one that can be very handy occasionally, and it brings even more parity between Azure and equivalent on-premises capabilities.

Protecting the keys to your cloud

A hot topic when migrating to cloud providers is security: where is data stored, how is it stored, how is it accessed, who can access it – all of the standard questions. However, there is a key theme that runs through all of these questions – they focus directly on the cloud provider and miss arguably one of the biggest attack vectors: service administrators, aka those with the “keys to the kingdom”!

Working with clients over the last few years, I have been involved in several conversations about securing admin access to cloud providers. Whether the end service is a SaaS offering, e.g. Office 365, or an application built upon a subset of PaaS/IaaS services in Azure, it is key to think about how you are going to provide access to IT staff as well as end users so you can successfully manage and maintain those services on an ongoing basis.

The vast majority of organisations have a variety of roles within their IT department, e.g. Service Desk, Roaming Support, Network Engineers, Server Engineers, Architects, etc. Each of these has specific activities and functions that need to be performed to fulfil the role – from unlocking accounts to creating new users or changing policies/settings.

For a long time organisations have been seeking the holy grail of RBAC (it’s not all negative – many succeed!), however it is a difficult journey. From an on-premises Microsoft perspective, many organisations still have a legacy of over-provisioned privileged accounts, e.g. Domain/Enterprise Administrators, and these are usually allocated directly to a standard user account which rarely has any controls configured, e.g. two-factor authentication, logging or time-bound access.

The lack of proper controls over privileged accounts is a serious attack vector, and it becomes even more critical when moving to the cloud. Access to administrative consoles or portals for on-premises systems is often tightly controlled through perimeter protection, and they are rarely publicly exposed. With cloud this is reversed, with portals primarily internet-facing, e.g. portal.office.com or portal.azure.com. It is therefore crucial that you secure the identities used to access administrative features appropriately, through Role Based Access Control (RBAC) mechanisms such as Privileged Identity Management (PIM), time-bound access, multi-factor authentication and the allocation of specific roles as opposed to high-permission accounts (e.g. Global Administrator). RBAC controls are a topic in their own right and there are many good articles on the web discussing what to consider and how to achieve this.
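As a small illustration of the “specific roles” point, here is a minimal sketch granting a narrowly-scoped built-in role rather than broad subscription-wide rights (AzureRM module; the user and resource group names are made up):

    # Grant a scoped built-in role instead of Owner/Contributor
    # across the whole subscription.
    New-AzureRmRoleAssignment -SignInName "server.engineer@contoso.com" `
        -RoleDefinitionName "Virtual Machine Contributor" `
        -ResourceGroupName "rg-production-vms"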

As the journey to the cloud is still in its infancy (but progressing at speed!), I would heavily recommend investing time in getting your RBAC model correct before the legacy situation discussed earlier becomes a reality again. A few key areas you should consider on this journey include:

  • Develop a “Logical Model”. This should provide a logical view of your organisation: the standard users, operational teams, third-party engineers, partners/consultants and contractors. Ultimately this should cover any user interacting with any system, regardless of permission or right. This activity may require a high degree of business analysis to understand the environment.
  • Detail the roles that exist within each of the above categories, e.g. for each operational team there may be several roles underneath it, there will be different roles within your standard users (potentially departmental), and there may be several types of contractor you employ.
  • Understand the controls you want to apply, for example: are there any activation constraints on elevating permissions (e.g. a change request), how are you applying the access/permission (PIM), is it a time-bound permission, does the access need to be witnessed, will the account perform a system operation, does it require auditing and/or does the session need to be recorded?
  • Design your account strategy, for example: what type of accounts are you creating – standard user accounts or specialist ADM accounts? Are specialist system accounts used and authorised through PIM? Are you applying MFA to these accounts?
  • Map out your “Physical Model”. This should include all available systems, technologies, etc. to whatever granularity you are trying to achieve. Some systems contain very in-depth RBAC controls, e.g. Office 365, Azure, System Center, etc. (to relate to Microsoft technologies) – examples for Office 365 can be found here: https://support.office.com/en-gb/article/About-Office-365-admin-roles-da585eea-f576-4f55-a1e0-87090b6aaa9d

Most importantly, an end-to-end RBAC model may take some time to achieve. It is important that you fully understand what degree of granularity and control you want to reach prior to embarking on a project of this nature. My personal recommendation is that organisations should take a light-touch approach initially – developing the logical model, implementing some controls and mapping these to the physical model/systems that contain built-in RBAC roles. Some of the more advanced systems that require in-depth analysis to create roles can be left until you have a more mature model. As this topic primarily revolves around cloud systems, it is good news that the majority of the major providers operate good standards for administrative and end-user roles, which you can easily map your logical model against.

Some links for further reading: