Auto remover of resources in Azure with Policy and Automation Account


If you are in the same situation as I was, this can help you and your company clean up old resources.
We have some subscriptions where the whole company can try things out and just play with Azure.
These subscriptions are for labbing and testing, and the resources in them should be short-lived.

Sadly, these resources often get forgotten and just sit there costing the company money, and no one knows whether they can safely be deleted.


  • Just deploy the policy and add the RBAC for the Automation account on subscriptions where you know nothing of importance is running.

All the code for this auto cleaner can be found in this github-repo: Auto-Cleaner
It is built out of two components:

  • A policy that adds a DeletionDate tag, set to the current date plus 30 days, to new resource groups.
    It also sets the tag to 30 days from the current date whenever an update is performed on a resource group. (It refreshes the date on the DeletionDate tag.)
  • An Automation account that deletes resource groups whose DeletionDate tag matches the current day.

The Policy

The policy has two parameters:
tagName = defaultValue: DeletionDate (string)
tagValue = defaultValue: 30 (Integer)

The policy uses the modify effect. This gives us the possibility to remediate old resource groups, so they also get the tag and ultimately get deleted.
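As an alternative to the portal steps, the definition can also be created with the Azure CLI. A minimal sketch, assuming the rule file from the repo is saved locally as policy.json; the definition name and file path are placeholders:

```shell
# Create the policy definition from the repo's rule file (name and path are placeholders)
az policy definition create \
  --name auto-clean-deletiondate \
  --display-name "Add DeletionDate tag to resource groups" \
  --mode All \
  --rules policy.json
```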


  1. Open the Azure portal and navigate to Policy > Definitions. Click + Policy definition.
  2. Choose a definition location (for example, under a management group or subscription).
  3. Name the policy whatever you prefer.
  4. Add a description.
  5. Copy the content of the file policy.json into POLICY RULE and Save.
    If you need more than the 30 days, change the value of ("defaultValue": 30) to your liking.
  6. Now we need to assign the policy, so click Assign.
  7. Scope the assignment to a management group or subscription.
  8. Give the assignment a name and description and enable it.
  9. Under Remediation, choose System assigned managed identity or create a user assigned managed identity.
  10. Click Review + create > Create.
  11. You can test the policy by creating a new resource group where you have scoped the policy assignment and checking that the tag is added automatically. If it is, everything is working as expected.
  12. If you want to add the tag to old resource groups, create a remediation task.
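The assignment and remediation steps above can be scripted too. A hedged sketch with placeholder names that assigns the policy with a system assigned managed identity and then kicks off a remediation task:

```shell
# Assign the policy at subscription scope with a system assigned identity (names are placeholders)
az policy assignment create \
  --name auto-clean-assignment \
  --policy auto-clean-deletiondate \
  --mi-system-assigned \
  --location westeurope \
  --role Contributor \
  --identity-scope /subscriptions/<subscription-id>

# Remediate existing resource groups so they also get the DeletionDate tag
az policy remediation create \
  --name tag-old-resource-groups \
  --policy-assignment auto-clean-assignment
```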

The Automation Account

The Automation account has one runbook that is scheduled to run at 01:00 AM every day.
You can deploy everything as-is from the repo described below, but you then need to add the PowerShell script to the runbook manually.
Alternatively, change runbookType: 'PowerShell7' to runbookType: 'PowerShell' in the file automationAccount.bicep and use the script import.ps1 in the folder extra-config (you need to uncomment # $scriptPath and # Import-AzAutomationRunbook).
Everything works fine with both PowerShell and PowerShell7, but you get more output/info from PowerShell7, so I would recommend it.

resource automation_account_auto_remove_runbook 'Microsoft.Automation/automationAccounts/runbooks@2019-06-01' = {
  parent: automation_account
  location: location
  name: 'auto-clean-resources'
  properties: {
    logActivityTrace: 0
    logProgress: true
    logVerbose: true
    runbookType: 'PowerShell7' // you can also use 'PowerShell' here together with the import.ps1 script; PowerShell7 gives more info from the runbook, but then you need to add the script manually.
  }
}
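If you go with PowerShell7 and have to add the script manually, that step can also be done from the Azure CLI (the az automation commands come from the automation extension). A sketch with placeholder account and resource group names:

```shell
# Push the cleanup script into the runbook draft, then publish it (names are placeholders)
az automation runbook replace-content \
  --resource-group auto-clean-rg \
  --automation-account-name auto-clean-aa \
  --name auto-clean-resources \
  --content @extra-config/auto-cleanup.ps1

az automation runbook publish \
  --resource-group auto-clean-rg \
  --automation-account-name auto-clean-aa \
  --name auto-clean-resources
```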


The account or service principal that deploys the automation account needs the following RBAC roles on the scope of the deployment.
For example, Contributor and User Access Administrator on the management group where you deploy the automation account.

  • Contributor
  • User Access Administrator


  1. Copy everything: either the whole GitHub repo Auto-Cleaner or just everything under bicep-deploy.
  2. Open the copied files/folder in Visual Studio Code. It should look like the picture below.
  3. Create a new repo and add a GitHub Action, or just deploy it locally.
  4. Deploy everything with commands like this:
    cd .\bicep-deploy\
    az login
    az deployment mg what-if --management-group-id yourManagementGroupId --name rollout -f .\main.bicep -l westeurope
    az deployment mg create --management-group-id yourManagementGroupId --name rollout -f .\main.bicep -l westeurope
  5. Add the auto-cleanup.ps1 script, which is located in extra-config, to the new runbook auto-clean-resources as seen in the picture below, and Save.
  6. Alternatively, run the script import.ps1 from the folder extra-config, or publish the runbook and add the schedule manually. It should look like this when it's finished.
  7. If you want to try it out, create a new resource group and change the date on the DeletionDate tag to the current date.
  8. Start the runbook. The output should look like this.
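The try-it-out steps can be scripted as well. A hedged sketch with placeholder names that sets the DeletionDate tag on a test resource group to today and then starts the runbook:

```shell
# Tag a test resource group with today's date so the runbook picks it up (names are placeholders)
az group update --name test-rg --set tags.DeletionDate=$(date -u +%Y-%m-%d)

# Start the runbook; watch the job output in the portal
az automation runbook start \
  --resource-group auto-clean-rg \
  --automation-account-name auto-clean-aa \
  --name auto-clean-resources
```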


In extra-config, under policy, there is an example of how you can deploy the policy entirely with code: policy
The runbook script is also found there: auto-cleanup.ps1
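The core of auto-cleanup.ps1 boils down to "find resource groups whose DeletionDate tag is today, and delete them". Sketched here in Azure CLI terms (the actual runbook is PowerShell, and the tag's date format is an assumption; match what the policy writes):

```shell
# Find and delete resource groups whose DeletionDate tag matches today (sketch, not the actual runbook)
today=$(date -u +%Y-%m-%d)
for rg in $(az group list --tag DeletionDate="$today" --query "[].name" -o tsv); do
  echo "Deleting resource group: $rg"
  az group delete --name "$rg" --yes --no-wait
done
```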

Protect resources from auto deletion

If for some reason you need to protect a resource group, just add more time to the tag as shown in the picture below.
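The same can be done from the CLI. A sketch; the resource group name is a placeholder and the date format is an assumption (match whatever your policy writes):

```shell
# Push the deletion date 90 days out for a resource group you want to keep (name is a placeholder)
new_date=$(date -u -d "+90 days" +%Y-%m-%d)
az group update --name my-protected-rg --set tags.DeletionDate="$new_date"
```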


SCEPman Bicep deployment

SCEPman is a slim and resource-friendly solution to issue and validate certificates using SCEP. It is an Azure Web App providing the SCEP protocol and works directly with the Microsoft Graph and Intune API.
I have used SCEPman together with RADIUS-as-a-Service so our offices can log on to our WiFi automatically. There are many more ways you can use SCEPman; take a look at the SCEPman docs.

This blog post is about the infrastructure, architecture and deployment of SCEPman with bicep.
As most companies now use infrastructure as code (IaC) and SCEPman just gives us the starting point I wanted to share my deployment of SCEPman with bicep.

So, to get started: SCEPman has a very smooth first deployment which sets everything up and also adds a resource that I can't find a way to deploy with bicep (pid).

Basic deployment

More advanced deployment nr2

This deployment will include alarms, Application Insights and autoscaling on the App Service plan.

Do the same as in the first deployment but change to bicep-deploy-2

Deploy your SCEPman instance from:

Use my bicep code bicep-deploy-2 and change the necessary parameters so they match your company. You can find the code here:

Run the bicep deployment to add your tags and to make sure that it works.

Open the SCEPman web app and run the PowerShell script, or run it with the GitHub Action (more advanced and optional) deploy-powershell (powershell.yml); change the scepman.ps1 script you find in extra-config in my GitHub repo.

After you have run the scepman script, a few new web app settings have been created; see the portal. Add those settings to your bicep code, both for the web app and for the webapp-certificatemaster.
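To see which settings the script created, so you can copy them into bicep, you can list them with the CLI. A sketch; the resource group and app names are placeholders:

```shell
# List the app settings the SCEPman script created, for both web apps (names are placeholders)
az webapp config appsettings list -g scepman-rg -n scepman-app -o table
az webapp config appsettings list -g scepman-rg -n scepman-certificatemaster -o table
```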


Visual studio code:

Geo-redundancy and even more advanced deployment nr3

This deployment adds Traffic Manager, (certificate for https) deployment slots and update strategy.


  • HTTPS certificate
  • Your own domain (so you can add DNS records)
  • SCEPman license

First: Determine what hostname/DNS name your SCEPman instance will have. Mine had (
Second: Buy a certificate for your SCEPman instance.
Recommendation: Buy the certificate through Azure App Service Certificates so you can keep everything in code and get automatic certificate renewal.

When the prerequisites are done, do the same as in the first deployments but change to bicep-deploy-3.0.
Use my bicep code bicep-deploy-3.0 and change the necessary parameters so they match your company. You can find the code here:
Running bicep-deploy-3.0 will add a key vault access policy for “Microsoft Azure App Service” so you can import/create your certificate for SCEPman.

Now move on to bicep-deploy-3.1 and deploy everything.
Use my bicep code bicep-deploy-3.1 and change the necessary parameters so they match your company. You can find the code here: SCEPman/bicep-deploy-3.1 at main · marfha88/SCEPman (
The deployment will show some errors; this is because you need to verify your domain for both web apps.

  • Verify your domains with DNS records (you find the info in the portal under App Service\Custom domains).
  • Run the deployment again.
  • Now run the PowerShell script that the web app shows. (If the script creates a new certificate for the certificate master web app, delete it.)
  • Add all the App Service application settings in bicep.

For update strategy follow the SCEPman docs.

  • Download the artifact from SCEPman:
  • Add the artifact to a storage account and point the web apps to that artifact, as in the bicep code below:
    WEBSITE_RUN_FROM_PACKAGE: 'https://${storageAccountName}.blob.${environment().suffixes.storage}/scepman-artifacts/'
  • The App Services need the Storage Table Data Contributor role on the storage account.
  • Point the deployment slots to SCEPman's own artifact in bicep, as below:
    WEBSITE_RUN_FROM_PACKAGE: ArtifactsLocationSCEPman
  • You can find bicep for the RBAC in extra-config.
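The Storage Table Data Contributor assignment from the list above can also be sketched with the CLI (the bicep version is in extra-config). All names here are placeholders:

```shell
# Grant the web app's managed identity Storage Table Data Contributor on the storage account (names are placeholders)
principalId=$(az webapp identity show -g scepman-rg -n scepman-app --query principalId -o tsv)
storageId=$(az storage account show -g scepman-rg -n scepmanstorage --query id -o tsv)

az role assignment create \
  --assignee-object-id "$principalId" \
  --assignee-principal-type ServicePrincipal \
  --role "Storage Table Data Contributor" \
  --scope "$storageId"
```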

Hope this helps you in your deployment of SCEPman.
When I started out for our company, there were a lot of small things I needed to figure out regarding how to deploy the bicep code.
So even if you don't need to deploy SCEPman, there might be some tips and tricks in the bicep code that you can use ☺️!

Github Action with Bicep

When deploying infrastructure as code I would absolutely recommend deploying it directly from GitHub, and that's where the GitHub Action comes into play.
A GitHub Action basically deploys your code, and this article is about how you can create a GitHub Action and deploy your code (in this article, bicep code).

I will use OpenID Connect to authenticate to Azure; you can read more about it here:

  1. To start, go to Azure Active Directory > App registrations and click New registration.
  2. Give it a meaningful name (for this example I will use bicep-scepman-sp).
  3. Go to Certificates & secrets, Federated credentials
  4. Add at least 2 federated credentials with federated credential scenario: Github Actions deploying Azure resources.
    • Create the first credential as the following example.
      If you are in a github organization, write down that organization.
    • Create the second credential as the following example.
  5. Now we need to add the RBAC roles that the new application registration/service principal needs to deploy your code/infrastructure. Always try to follow the principle of least privilege.
    But in this example, I added:
    • Contributor
    • User Access Administrator
  6. Now that you have your application registration/service principal navigate to your github repo and open Settings, Secrets, Actions and add those 3 secrets:
    • AZURE_CLIENT_ID = Your application registration (Application (client) ID).
    • AZURE_SUBSCRIPTION_ID = Subscription where you’re going to deploy.
    • AZURE_TENANT_ID = Your Azure tenant id.
  7. Now in your repo create the following folder structure: .github/workflows/
    You can see how it should look in the repo:
  8. Copy the bicep-deploy.yml to your repo and place it in folder .github/workflows/ (as it does in this repo)
  9. Open the file bicep-deploy.yml and change:
    • paths to your path
    • az deployment group what-if -g (yourrg) --name rollout-$deploytime -f (to where your files are located)
    • az deployment group create -g (yourrg) --name rollout-$deploytime -f (to where your files are located)
    • Change to your resource group (-g)
  10. The lines you need to change on bicep-deploy.yml:
    • 10 – bicep-deploy-1
    • 16 – bicep-deploy-1
    • 48 – resource group and bicep-deploy-1
    • 57 – resource group and bicep-deploy-1
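Step 4's federated credentials can also be created with the CLI instead of the portal. A hedged sketch; the application object ID, org/repo and credential name are placeholders:

```shell
# Create a federated credential for pushes to the main branch (org/repo and IDs are placeholders)
az ad app federated-credential create --id <application-object-id> --parameters '{
  "name": "github-main-branch",
  "issuer": "https://token.actions.githubusercontent.com",
  "subject": "repo:<org>/<repo>:ref:refs/heads/main",
  "audiences": ["api://AzureADTokenExchange"]
}'
```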
# This is a basic workflow to help you get started with GitHub Actions

name: bicep-deploy

# Controls when the workflow will run
on:
  workflow_dispatch:
  pull_request:
    types: [opened, reopened, edited, synchronize]
    branches:
      - 'main'
    paths:
      - 'bicep-deploy-1/**' ## Change this to the folder where your deployment files are located
  push:
    branches:
      - 'main'
    paths:
      - 'bicep-deploy-1/**' ## Change this to the folder where your deployment files are located

permissions:
  id-token: write
  contents: read

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "Bicep-Whatif-OR-Create"
  Bicep-Whatif-OR-Create:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Azure Login
        uses: azure/login@v1
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      # Start the Bicep validation
      - name: Azure Bicep validate what-if
        if: ${{ github.event_name == 'pull_request' || github.event_name == 'workflow_dispatch' }} # Run the what-if on a pull request or manually from GitHub Actions
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            az bicep install
            deploytime=$(date +"%m-%d-%y-%H")
            az deployment group what-if -g yourrg --name rollout-$deploytime -f bicep-deploy-1/main.bicep
      - name: Azure Bicep Create via azcli
        id: scepmanbicepdeploy
        if: ${{ github.event_name == 'push' }} # Run the bicep create when the code is pushed to the main branch.
        uses: azure/CLI@v1
        with:
          azcliversion: latest
          inlineScript: |
            deploytime=$(date +"%m-%d-%y-%H")
            az deployment group create -g yourrg --name rollout-$deploytime -f bicep-deploy-1/main.bicep

# You need to change yourrg to your resource group and bicep-deploy-1 to bicep-deploy-2 or bicep-deploy-3 depending on your deployment.
# Or just create your own workflow and copy what you need from this repo.

Now you have a simple GitHub Action that deploys at resource group scope.
You can of course scope it to:
az deployment sub create – for subscription scope
az deployment mg create – for management group scope

When you deploy new code/features, always create a new branch and open a pull request with your changes. This triggers the what-if action on the pull request; review its result and then approve the pull request.
When the pull request is merged, the code is pushed to the main branch and the bicep create runs.


Move cloud shell storage

Background info

If you have the same problem as we had, with cloud shell storage accounts scattered all over your Azure platform, this post is for you.
It is about how you can move your cloud shell storage to a new storage account.
I would recommend keeping these storage accounts in one or two resource groups so you don't end up with the same problem again.
We can also create a policy that denies the creation of storage accounts carrying the tag that automatically gets added to a cloud shell storage account: “ms-resource-usage: azure-cloud-shell”.
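To see how scattered they are, that automatic tag makes the accounts easy to find. A sketch:

```shell
# List all resources carrying the cloud shell usage tag
az resource list --tag ms-resource-usage=azure-cloud-shell \
  --query "[].{name:name, resourceGroup:resourceGroup}" -o table
```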

Move the files

Some users might have saved files such as pictures, scripts, connection strings and so on.
If a user wants to copy the whole cloud shell drive, this is how you could do it.
If a user doesn't have anything he or she wants to save, simply follow steps 3-4 and 13 (delete the old storage account).

Copy paste cloud shell

  1. Copy the name of the storage account to Notepad or similar; you will need it later on.
  2. Open cloud shell, select Bash and run: df

    The storage account name is shown as in the picture above, followed by the file share name.
  3. In the cloud shell, run clouddrive unmount and answer y, as shown in the picture.
  4. Now create a new cloud shell/storage account in the subscription of your choice, and likewise for the resource group. Name your new storage account and file share something meaningful, for example: cloudshellyourname
  5. Now verify that the new storage account and file share have been created.
  6. When you have verified that the storage account and file share exist, run clouddrive unmount from the cloud shell again and answer y, as the picture above shows.
  7. In the Azure portal, navigate to the old storage account and click Open in Explorer (if you don't have the application, download and install it).
  8. Log in to Azure Storage Explorer, navigate to your old storage account > file share > image, and copy the image file.
  9. Now navigate to your new storage account > file share > image and paste the copied image file.
  10. Replace the image file and choose Apply to All Conflicts.
  11. Now open cloud shell from the portal and attach the newly created storage account.
  12. Verify that the files are located in your new storage account/cloud shell, for example with the command dir.
  13. Delete the old storage account.
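Steps 11-12 can also be done from inside Cloud Shell (bash) with the built-in clouddrive command. A sketch; all names are placeholders:

```shell
# Attach Cloud Shell to the new storage account and verify the files (names are placeholders)
clouddrive mount -s <subscription-id> -g <resource-group> -n cloudshellyourname -f <file-share>
dir
```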


Disaster recovery with Runbooks the easy way

I recently came across a problem with our VMs that are protected through a Recovery Services vault. The VMs are protected, but there are a lot of manual steps we need to do in the case of a disaster, which would be our primary region going down and our VMs being replicated to another region.

For us this was assigning 3 ASGs to the VMs' NICs; if we did not do this, the VMs would not be able to perform what they are supposed to do, because they would lack network connectivity.
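What the runbook ultimately has to do can be sketched in CLI terms (our actual runbook is PowerShell, and all names here are placeholders):

```shell
# Attach the 3 ASGs to each failed-over NIC's ip configuration (sketch only; names are placeholders)
for nic in $(az network nic list -g failover-rg --query "[].name" -o tsv); do
  az network nic ip-config update -g failover-rg --nic-name "$nic" --name ipconfig1 \
    --application-security-groups asg-1 asg-2 asg-3
done
```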

So I started to look into the problem and found that within the Recovery Services vault there is a function that can run runbooks from an automation account: Recovery Plans (Site Recovery). All right, nice, I thought.

So I created a new automation account, located in the region the VMs fail over to, and added the script/runbook that I had tested from PowerShell on my computer against test-failover VMs; the script added the 3 ASGs.
On the automation account I enabled a system assigned managed identity and gave it the RBAC it needed to execute the script, and it worked.
I also added a Connection in the automation account with the type Azure: give it a name and an AutomationCertificateName, and add the subscription.

Back in the Recovery Services vault, I navigated to Recovery Plans (Site Recovery) and created a recovery plan with a script step (for me a post-step): give the step a name, choose the automation account and choose the runbook.

The Recovery Services vault also needs a system assigned managed identity to be able to execute the script/runbook, along with the same RBAC roles as the automation account, plus:

  • Contributor on the storage account that caches the replication.
  • Storage Blob Data Contributor on the storage account that caches the replication.

When all the RBAC is in place you can try a test failover from your Recovery Plans (Site Recovery).

Hopefully this works as well for you as it did for me.

Bicep code and Powershell script can be found here: