Azure Portal – Ability to Quick Delete Resources

It's very minor, and very finicky, but one thing that's always frustrated me in the Azure Portal is the inability to clear up my test subscriptions without resorting to PowerShell. Occasionally I'll spend some time spinning new services up directly in the portal rather than using PowerShell, and when I've finished I like to clear things down to avoid costs, which usually involves just deleting any resource groups I've spun up.

Now, the PowerShell to do this is pretty straightforward… you can just use Get-AzureRmResourceGroup and Remove-AzureRmResourceGroup with the -Force flag, plus a loop if you want to iterate through them all, so why complain? :-) Well, sometimes launching a script or PowerShell is just too much effort, which invariably means I'll leave stuff running and then incur costs.
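For reference, a minimal sketch of that cleanup loop might look like the following (this assumes the AzureRM module is installed and you've already signed in with Login-AzureRmAccount; the `*test*` name filter is purely illustrative):

```powershell
# Remove every resource group whose name marks it as a test group.
# Assumes the AzureRM module is loaded and you are already logged in
# via Login-AzureRmAccount. The '*test*' filter is an example only -
# adjust it (or remove it) to suit your own naming convention.
Get-AzureRmResourceGroup |
    Where-Object { $_.ResourceGroupName -like '*test*' } |
    ForEach-Object {
        # -Force suppresses the per-group confirmation prompt
        Remove-AzureRmResourceGroup -Name $_.ResourceGroupName -Force
    }
```

Be careful with -Force here: the groups and everything in them are gone for good, so double-check the filter before running it against anything other than a throwaway subscription.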

I’m not sure when this feature change came about (either last week during Ignite, or earlier – or has it been there all along and I’ve missed it?!) – however you can now select all your resources at once (rather than deleting them individually), type the word ‘delete’ to confirm, and the portal will then attempt to remove them. This doesn’t always work, as sometimes you’ll have nested resources that block the deletion, however it’s a quick way to delete resources if you don’t want to resort to PowerShell.

[screenshot]


A quick look at Instant File Restore in an Azure VM…

Instant File Restore directly from an Azure VM backup is a feature release that caught my eye recently. I remember being excited in the early days of on-premises virtualisation when backup companies, e.g. Veeam, introduced the ability to mount VM backups and access the file system directly, thus allowing a quick restore – I always thought it was a handy feature. Granted, it does not (always) replace proper in-VM backups of the application or service, however it does provide a quick and easy way to restore a file, config, etc.

The feature went GA a couple of days ago, however the portal still shows it as in preview. More info can be found in the Azure blog post:

https://azure.microsoft.com/en-us/blog/instant-file-recovery-from-azure-vm-backups-is-now-generally-available/

To start with you need an Azure virtual machine and some files you’d like to protect. I’ve created a simple VM from the marketplace running Windows Server 2016 Datacenter, and created some test files on one of the disks:

[screenshot]

You’ll then need to log into the Azure Portal, with a Recovery Services vault already configured and the virtual machine you want to protect added to it. The following screenshot shows the virtual machine ‘azurebackuptest’ added to the vault:

[screenshot]

If your machine isn’t added yet, use the ‘Add’ button and choose a backup policy. You then need to ensure you have a ‘restore point’ from which you can recover.
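If you prefer to script this step, a rough sketch of enabling protection and taking an ad-hoc backup via the AzureRM cmdlets is shown below – note that the vault name, resource group and policy name are all placeholders for whatever you have configured:

```powershell
# Sketch: enable backup for a VM and trigger an on-demand restore point.
# Assumes the AzureRM module and an existing Recovery Services vault;
# 'myVault', 'myResourceGroup' and 'DefaultPolicy' are placeholders.
$vault = Get-AzureRmRecoveryServicesVault -Name 'myVault'
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

# Protect the VM using an existing backup policy
$policy = Get-AzureRmRecoveryServicesBackupProtectionPolicy -Name 'DefaultPolicy'
Enable-AzureRmRecoveryServicesBackupProtection -Policy $policy `
    -Name 'azurebackuptest' -ResourceGroupName 'myResourceGroup'

# Trigger an ad-hoc backup so a restore point exists straight away
$container = Get-AzureRmRecoveryServicesBackupContainer `
    -ContainerType AzureVM -Status Registered -FriendlyName 'azurebackuptest'
$item = Get-AzureRmRecoveryServicesBackupItem `
    -Container $container -WorkloadType AzureVM
Backup-AzureRmRecoveryServicesBackupItem -Item $item
```

The on-demand backup can take a while to complete, so give it time before expecting a restore point to appear in the portal.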

I’m now going to go back to my virtual machine and permanently delete some files (so they’re gone from the recycle bin, too), as you can see from the following screenshot:

[screenshot]

We now have a virtual machine with missing files that we’d ordinarily need to recover via an in-VM backup agent – however we’ll use the File Recovery tools instead. Navigate to the vault again and choose ‘File Recovery (Preview)’:

[screenshot]

From here you need to select your recovery point and download the executable – it can take a minute or two to generate. Once you’ve downloaded the .exe, an ‘Unmount Disks’ option will appear:

[screenshot]

Now simply run the .exe file on the machine where you’d like the disks to be mounted so you can copy the files off. The executable launches PowerShell and mounts the disks as iSCSI targets (disconnecting any previous connections if you have any):
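As an aside, you can also fetch the mount script from PowerShell rather than the portal. A hedged sketch using the AzureRM.RecoveryServices.Backup cmdlets follows – again, the vault and VM names are placeholders, and I’m simply grabbing the most recent recovery point:

```powershell
# Sketch: download the file-recovery mount script from PowerShell.
# Assumes the AzureRM module; 'myVault' and 'azurebackuptest' are
# placeholders for your own vault and VM names.
$vault = Get-AzureRmRecoveryServicesVault -Name 'myVault'
Set-AzureRmRecoveryServicesVaultContext -Vault $vault

$container = Get-AzureRmRecoveryServicesBackupContainer `
    -ContainerType AzureVM -Status Registered -FriendlyName 'azurebackuptest'
$item = Get-AzureRmRecoveryServicesBackupItem `
    -Container $container -WorkloadType AzureVM

# Take the most recent recovery point
$rp = Get-AzureRmRecoveryServicesBackupRecoveryPoint -Item $item |
    Select-Object -First 1

# Downloads the executable/script that mounts the recovery point's disks
Get-AzureRmRecoveryServicesBackupRPMountScript -RecoveryPoint $rp

# ...copy your files off the mounted drives, then clean up:
Disable-AzureRmRecoveryServicesBackupRPMountScript -RecoveryPoint $rp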

[screenshot]

You can now browse the mounted disk and recover the files that were deleted earlier:

[screenshot]

Once complete, remember to close the PowerShell session as indicated in the previous screenshot, and don’t forget to click ‘Unmount Disks’ in the Azure Portal:

[screenshot]

That’s it! A straightforward feature, but one that can be very handy occasionally, and it brings even more parity between Azure and the equivalent on-premises capabilities.

Servicing in a Cloud World

To kick things off on this rather derelict-looking blog ;-), I wanted to start with a topic that I’ve been discussing with several customers recently, namely ‘Servicing in a Cloud World’. Why? Because it’s super important and requires a step change in the way IT organisations react to change, and communicate and engage with their end users about it.

Widespread adoption of cloud technologies is almost old news now: organisations have been, and currently are, actively migrating services to a variety of cloud models. This includes anything from IaaS to PaaS to SaaS, and in doing so they hand responsibilities over to the provider on a sliding scale. The most extreme end of this scale is any SaaS-based solution, as the provider not only manages and administers the solution for you, but also owns the roadmap, which includes both feature additions and deprecations.

The following diagram, taken from the “Change Management for Office 365 clients” guide by Microsoft, illustrates the difference between a traditional release model and a servicing release model:

[diagram: traditional release model]

In a traditional release model, with infrastructure, platform or services managed and administered by the organisation, releases typically happen after several months of planning, development and testing. The actual “release” is then usually governed through change/release management processes, which generally involve some form of impact assessment as well as a mechanism to alert the target user base that the release is coming. This may then be supported by engagement through training, user guides, etc.

[diagram: servicing release model]

In a servicing release model, where the infrastructure, platform or service is managed and administered by the provider, releases typically happen much faster and much more aggressively. This is because the provider is incentivised (generally through a PAYG subscription approach) to innovate on the platform to retain customers or attract new ones. As illustrated in the figure, this means each stage of the release lifecycle is generally much smaller. This model is advantageous for organisations, as it means they can leverage capability much sooner rather than having to invest in the planning, development and testing themselves. A well-understood benefit of cloud, right?!

Onto the point of this post: organisations typically have well-defined and robust change and release management processes, grown through many years of managing and delivering services to their user base (as discussed earlier in reference to the traditional release model). They are experts at managing changes and releases that are under their own control. However, it is crucial that organisations adapt these processes in reaction to the “servicing world”. This includes gaining a full understanding of the provider’s roadmap in the following areas:

  • Minor / Major Changes
  • Feature additions
  • Feature modifications
  • Feature deprecations
  • New product releases (in the same portfolio)
  • Servicing (which can include downtime)
  • Maintenance

As noted earlier, these releases are typically much more frequent than previously, and roles and responsibilities across the IT organisation will need to adapt – making sure that they not only focus on the actual release but also understand how it will impact end users, who will likely require support, since the frequency of change and the adoption of new features/technologies will be unfamiliar.

To put some numbers behind this, the Office 365 Message Centre averages about 12–15 “notifications” per week (note, this changes frequently!) – anything from a new product to a feature change or a service announcement. Ultimately, as more services move to the cloud, the roles and responsibilities within IT need to adapt accordingly as IT professionals work to support their organisations operating in a cloud servicing world.