Creating an Azure DevOps Multi-Stage Pipeline Pt 2


Part 2 in a multi-part series on Azure DevOps pipelines

Take a look at Part 1 and Part 3

Welcome to the second post in the series Creating a Multi-Stage Pipeline in Azure DevOps! In the last post we started creating a YAML-based pipeline and set up the build. From the end of the last post there are two paths that can be taken to start deploying code – the Releases UI in Azure DevOps or continuing to add stages to the YAML. Both are viable options, and we have been using Releases since its creation; however, in this post we are focusing on keeping the pipeline in code.

At the end of this post we will have the packaged code created from the build deployed to two different app services (we will call them Staging and Production) and appropriate dependencies between stages. Additionally, we will set a pre-deployment approval check before deploying to the Production infrastructure.

Side note: During the writing of this post, multi-stage YAML pipelines have been made generally available!

Requirements to follow along

  • Azure Subscription – Sign up for a free account
  • Azure DevOps Account – Sign up for a free account
  • Repository – Any Git repository can be used and connected to Azure Pipelines, but this walkthrough will utilize an Azure Repos Git repository
  • IDE – This walkthrough was created using Visual Studio Code which has extensions for Pipeline syntax highlighting

Base project

We will be continuing with the .NET Core API project and pipeline started in the last post. You can follow along in the first post and then pick up from there, or grab the code from the branch ‘post1-build’ as a starting point for this post. It is not necessary to have previous knowledge of .NET Core for this walkthrough; the concepts of creating the Pipeline are universal between all supported languages.

Planned Outline

This is the tentative list of planned posts in the series. Links and list will be updated as posts are published.

  1. Intro and setting up the build steps
  2. Deployment steps, environments and approvals (this post)
  3. Pipeline templates 

Preparation – Azure Infrastructure and Azure DevOps Service Connection

In order to deploy the code we will need a place to host it. For this post we will be using Azure App Services. There is a free tier for App Service Plans so no cost will be accrued for this walkthrough.

There are multiple ways to get these resources set up so go ahead and use your preferred method. I’ll outline a few steps to get them set up in Visual Studio Code. The resources we need are an App Service Plan with two App Services (one for staging and one for production).

Create Azure Resources in Visual Studio Code

  • Install the Azure App Service extension from the Visual Studio Code Marketplace
  • Hit the ‘F1’ key and do a search for ‘Azure App Service create’
  • Select ‘Azure App Service: Create New Web App (Advanced)’
  • Sign into your Azure account
  • Follow the steps to create an App Service for the staging environment
    • Environment OS must be Windows
    • The App Service Plan can be the Free tier
  • Once completed search and select ‘Azure App Service: Create New Web App (Advanced)’ again
  • Follow the steps to create an App Service for the production environment
    • Use the resource group previously created
    • Use the App Service Plan previously created

Azure DevOps Service Connection

One additional setup piece that needs to happen is to create a Service Connection in Azure DevOps to your Azure account.

There are automatic and manual options to set this up. Below are quick instructions for an automatic setup if you have the appropriate permissions in Azure and Azure DevOps; otherwise, the Azure DevOps documentation covers setting up a service connection manually.

  • In the Project Settings select ‘Service connections’
  • Create a new service connection
  • Select ‘Azure Resource Manager’
  • Select ‘Service principal (automatic)’ for the Authentication method
  • Select appropriate Subscription and fill out details
  • Make sure ‘Grant access permission to all pipelines’ is selected and Save

Pipeline – First Look at Deployment Stage

Phew, now with that setup out of the way we can get back to setting up the Pipeline! Our first priority is getting the code to the staging instance. Before adding new code, let’s refresh on what the pipeline looks like currently:
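As a reference point, a sketch of the build-only pipeline from the last post might look like the following. The task details here are a reconstruction, not the exact gist; the artifact name ‘app’ matches what is referenced later in this post:

```yaml
trigger:
  - master

stages:
  - stage: Build_Stage
    displayName: Build
    jobs:
      - job: Build_Job
        displayName: Application Build
        pool:
          vmImage: 'windows-latest'
        steps:
          # Restore dependencies and publish the application
          - task: DotNetCoreCLI@2
            displayName: Restore
            inputs:
              command: 'restore'
              projects: '**/*.csproj'
          - task: DotNetCoreCLI@2
            displayName: Publish
            inputs:
              command: 'publish'
              publishWebProjects: true
              arguments: '--output $(Build.ArtifactStagingDirectory)'
          # Make the zipped publish output available to later stages
          - task: PublishPipelineArtifact@1
            displayName: Publish Artifact
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'app'
```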

A pipeline is a collection of stages. Stages can run sequentially or in parallel depending on how you set dependencies up (more on dependencies later). Jobs in a stage all run in parallel and tasks within a job run sequentially.

Running jobs in parallel
The applications we work on at MercuryWorks all have functional tests and infrastructure as code which need their own package of files to be sent to the Release. In the build stage we end up having three different jobs – one to build and create the application artifact, one to build and create the functional test artifact, and one to create the infrastructure artifact. They all run in parallel which reduces the overall time to complete the stage.

Right now, we only have one stage for the build with the last step creating an artifact of the built code. The tasks to deploy this code to the staging infrastructure will be in a separate stage (I guess technically everything could be in one stage but that would be pretty overwhelming to try to understand and debug).

This stage will have a few new concepts compared to the build. Let’s take a look at what the stage looks like – don’t panic – we will walk through all of the new settings.
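A sketch of such a deployment stage is below, with the package locations intentionally left blank for now. The service connection and app names are placeholders, and the line numbers cited in the list that follows refer to the original gist rather than this sketch:

```yaml
- stage: Staging_Stage
  displayName: Deploy Staging
  jobs:
    - deployment: Staging_Deployment_Job
      displayName: Staging Deployment
      pool:
        vmImage: 'windows-latest'
      environment: 'Staging'
      strategy:
        runOnce:
          deploy:
            steps:
              # Unzip the build artifact (locations filled in later)
              - task: ExtractFiles@1
                inputs:
                  archiveFilePatterns: ''
                  destinationFolder: ''
              # Push the extracted files to the staging App Service
              - task: AzureRmWebAppDeployment@4
                inputs:
                  ConnectionType: 'AzureRM'
                  azureSubscription: '<your service connection>'
                  appType: 'webApp'
                  WebAppName: '<your staging app service>'
                  Package: ''
```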

  • deployment (line 8) – The first major difference from the build stage is that instead of a job listed under jobs, it is named deployment. This is a specially named job that allows for more options than a standard job, including deployment history and deployment strategies.
  • environment (line 12) – A bit further down there is a property named environment. This is set to ‘Staging’ because that is what we are naming this environment; in the deployment stage to the production instance it will be named ‘Production’. These environments can be named according to your own environment naming strategy. We will be going over Environments and what setting this property allows us to do later in this post.
  • strategy (line 13) – The strategy section has a variety of life cycle hooks (they are specially named jobs) that can be used in different deployment strategies. A description of all available options can be found in the Azure DevOps documentation. For this walkthrough we are using the simplest strategy, RunOnce. In RunOnce, each of the life cycle hooks is executed once and then, depending on the result, an on: success or on: failure hook is run. Our application is very simple so we only use the deploy hook.
  • steps (line 16) – Each life cycle hook has its own set of steps to execute. At this point things should look familiar outside of the specific tasks being used. First we want to extract the files from the zip that was created in the build, then the files will be deployed to an Azure App Service. We are deploying a .NET Core application here, but the deploy task can also be used for applications built in PHP, Node.js and a few other languages.


Reviewing the tasks, you should notice that the package locations in the extract files task and the package in the deploy step are not filled in yet. In the last post we set up the build, which created an artifact that needs to be referenced here. Let’s add three more lines and fill in the package location details.

dependsOn (line 7) – This is an array of stages that this stage should verify have successfully completed before running. Using this array on each stage will help arrange the pipeline to run exactly in the order you need. The deployment stage just added should not run before, or in parallel with the Build stage because it needs the artifact created. Note that this needs to match the name set to the stage: property, not the display name.

download (line 18-19) – This is a special named task that will download artifacts created from previous stages. It is noted that we want artifacts from the current context – the run that is currently happening, not a previous run. The artifact specified to download is the one created in the Build stage (it was named ‘app’).

archiveFilePatterns/destinationFolder (line 27 – 28) – Now we can tell this task where to find the zip file. The location where artifacts are downloaded to is contained in the variable $(Pipeline.Workspace). The folder structure was defined in the build and we can refresh our memory of it by reviewing the artifacts created from the last build. I generally like to extract files to a new directory so we specified a files folder.

Package (line 32) – The .NET Core publish task put all of the files inside a folder named the same as the project, which is why there is the extra folder inside the files folder here. To check the exact file structure of the zip file that was created, the artifact can be downloaded from the above view.
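Putting the three additions together, a sketch of the staging stage now looks like this. The artifact name, folder names and project folder are assumptions based on the build from Part 1; the service connection and app names remain placeholders:

```yaml
- stage: Staging_Stage
  displayName: Deploy Staging
  dependsOn: ['Build_Stage']
  jobs:
    - deployment: Staging_Deployment_Job
      displayName: Staging Deployment
      pool:
        vmImage: 'windows-latest'
      environment: 'Staging'
      strategy:
        runOnce:
          deploy:
            steps:
              # Download the 'app' artifact from the current run
              - download: current
                artifact: 'app'
              # Extract the zip into a new files folder
              - task: ExtractFiles@1
                inputs:
                  archiveFilePatterns: '$(Pipeline.Workspace)/app/*.zip'
                  destinationFolder: '$(Pipeline.Workspace)/files'
              # Deploy the extracted project folder to the App Service
              - task: AzureRmWebAppDeployment@4
                inputs:
                  ConnectionType: 'AzureRM'
                  azureSubscription: '<your service connection>'
                  appType: 'webApp'
                  WebAppName: '<your staging app service>'
                  Package: '$(Pipeline.Workspace)/files/<project folder>'
```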

Deploy to Staging

There are still a couple things to walk through, but the pipeline is at a point now where we can test it out. Here is what the full pipeline should look like now. Let’s commit the updates and watch it run.
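In outline, the full pipeline now has the shape below (a sketch, not the exact gist – the build steps are condensed to the artifact publish, and stage/job names are assumptions from this walkthrough):

```yaml
trigger:
  - master

stages:
  - stage: Build_Stage
    displayName: Build
    jobs:
      - job: Build_Job
        displayName: Application Build
        pool:
          vmImage: 'windows-latest'
        steps:
          # ...restore, build and publish tasks from Part 1...
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'app'

  - stage: Staging_Stage
    displayName: Deploy Staging
    dependsOn: ['Build_Stage']
    jobs:
      - deployment: Staging_Deployment_Job
        displayName: Staging Deployment
        pool:
          vmImage: 'windows-latest'
        environment: 'Staging'
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: 'app'
                - task: ExtractFiles@1
                  inputs:
                    archiveFilePatterns: '$(Pipeline.Workspace)/app/*.zip'
                    destinationFolder: '$(Pipeline.Workspace)/files'
                - task: AzureRmWebAppDeployment@4
                  inputs:
                    ConnectionType: 'AzureRM'
                    azureSubscription: '<your service connection>'
                    appType: 'webApp'
                    WebAppName: '<your staging app service>'
                    Package: '$(Pipeline.Workspace)/files/<project folder>'
```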

Checking on the build, there are some UI changes now that the second stage has been added.

Clicking into the pipeline it now shows both stages. Notice the ‘Build’ stage which indicates that it has 1 job (0/1 completed as it is currently running). Within the stage is the Application Build job. If there were more jobs within the stage they would be listed here.

If you do not see the job list, hover over the stage and click on the up/down arrow symbol that will show up in the top right corner of the box. Clicking into a job will give a further break down of each task and logs.

Once the pipeline has completed, head on over to your site! The endpoint for this will be the URL of the staging App Service created earlier. Note that this sample application has no endpoint at the root level.

Production Environment Deployment

The final stage needed in the pipeline is to deploy to the production App Service that was created. It will be pretty similar to the previous stage we created with a couple exceptions:

Make sure that the stage and job names are all updated to indicate they are for Production as well as the name of the web app being deployed to.

One place I want to point out is the dependsOn section. In this stage it has been updated to indicate a dependency on the Build stage – because it needs the artifacts – as well as the Staging stage. We don’t want production being released before (or even at the same time as) staging.

For a quick demonstration, this is what the pipeline would look like in Azure DevOps if the Production stage only had a dependency on the Build stage (dependsOn: ['Build_Stage']).

Notice that the dependency lines show that both Staging and Production will run at the same time after the Build stage has completed. Instead, let’s make sure that the Production stage has all of the proper dependencies and commit the code.
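With both dependencies in place, a sketch of the Production stage looks like this (service connection, app and project folder names are placeholders, mirroring the staging stage):

```yaml
- stage: Production_Stage
  displayName: Deploy Production
  # Production waits for both the artifact and a successful Staging deploy
  dependsOn: ['Build_Stage', 'Staging_Stage']
  jobs:
    - deployment: Production_Deployment_Job
      displayName: Production Deployment
      pool:
        vmImage: 'windows-latest'
      environment: 'Production'
      strategy:
        runOnce:
          deploy:
            steps:
              - download: current
                artifact: 'app'
              - task: ExtractFiles@1
                inputs:
                  archiveFilePatterns: '$(Pipeline.Workspace)/app/*.zip'
                  destinationFolder: '$(Pipeline.Workspace)/files'
              - task: AzureRmWebAppDeployment@4
                inputs:
                  ConnectionType: 'AzureRM'
                  azureSubscription: '<your service connection>'
                  appType: 'webApp'
                  WebAppName: '<your production app service>'
                  Package: '$(Pipeline.Workspace)/files/<project folder>'
```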

Congratulations! Your application has been deployed to all environments.


Before we celebrate too much there is one last thing we need to do. If you watched the pipeline run, you may have noticed that the Production stage ran immediately after the Staging stage. While some projects may be able to do that with an appropriate number of tests, most of the time we prefer to have an approval step in between stages.

We use the Staging environment as a way to demo new functionality to clients and like to have a bit more planning around when new code is deployed.

This is where Environments come in – we had touched on it briefly when looking at the deployment stage. It is more than just a nice way in the pipeline code to indicate what environment that stage is for.

In Azure DevOps under the Pipelines menu item in the navigation there is a section named Environments. After clicking on this, you will see that there are already some environments listed. These were automatically created when the environment property was added to the pipeline script.

This is a nice, quick way to determine what version of the application is deployed to each environment and what pipeline run it is related to.


Another benefit of defining environments is the ability to set approval gates. When in a specific environment click on the three-dot menu in the top right and select ‘Approvals and checks’.

There are multiple types of checks that can be set before an environment (some will be familiar to those of you who used approval gates in the classic Release UI). We are only going to be adding an approval for this pipeline, so go ahead and select ‘Approvals’. On this form you can add specific users and/or groups to the list of Approvers. Fill out the approvers and click ‘Create’.

Head back to the pipeline and select ‘Run pipeline’ in the top right. Leave the default options, select ‘Run’ and let the pipeline run. Once Staging completes, you should now see Production marked as ‘Waiting’ and the person you set as an approver should have received an email. Logging in as the Approver, there will be a Review button above the pipeline flow.

Clicking into Review, the Approver can ‘Approve’ or ‘Reject’ the deployment and add an optional comment.

Once approved, the Production stage will run as normal. Final congratulations! You now have a full pipeline in YAML with multiple environments and approvers.

Next Steps

This should get you started on creating YAML pipelines in Azure DevOps. The next post will cover some additional tips and tricks to help streamline creating your pipeline. There are many ways to customize these pipelines, so don’t be surprised to see various posts come up that extend what was started here.

If you would like your application started or switched to using Azure DevOps Pipelines, contact us and let’s see how we can help!



  1. Tom | Jul 21, 2020

    Thank you!! This is the best tutorial on multi-stage pipelines I have read! It really solidified my understanding much better than the MS docs. When is Part 3 arriving?

    1. Susan Bell | Jul 22, 2020

      Glad it helped your understanding, Tom! It has always helped me to check out multiple sources on a topic to get a good understanding – everyone has a different viewpoint and way they present the information.

      I’m a big fan of keeping everything in version control so it was exciting to start working with multi-stage pipelines.

      Part 3 should be out next month. You can check back on the blog or follow us on Twitter (@mercuryworks) for the announcement.

    2. Susan Bell | Sep 15, 2020

      Hey Tom,
      I wanted to let you know that the third part of the series is now up! I hope you enjoy it.

      1. Tom | Oct 22, 2020

        Thank you so much!!
