Enhance Pipeline

Ok! We have written a functional build pipeline for your app that supports continuous integration. Now it's time to use ADO Pipelines and YAML features to enhance our pipeline to match a standard implementation.

Task labels

As you can see in the pipeline runs, ADO provides default names for the tasks under the job, taken from the name of each task. Notice how three of the four tasks in the pipeline share the same name - DotNetCoreCLI. We want to provide specific names to better identify the action each task performs.

Add a displayName key under each task and provide an appropriate value for the specific task (ex: "Build App", "Unit Tests", "Package Artifact", "Publish Artifact").

- task: DotNetCoreCLI@2
  displayName: 'Build App'
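
For reference, a rough sketch of how display names might be applied across the otherwise-identical DotNetCoreCLI tasks (the commands and inputs shown are placeholders; keep whatever your existing tasks already define):

- task: DotNetCoreCLI@2
  displayName: 'Build App'
  inputs:
    command: 'build'

- task: DotNetCoreCLI@2
  displayName: 'Unit Tests'
  inputs:
    command: 'test'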

Variables

In your pipeline file, notice how $(Build.ArtifactStagingDirectory) was used in the .NET publish task and the publish artifact task. This is how YAML references variables. We never defined this variable; rather, we referred to an existing predefined agent variable. ADO Pipelines also allows you to add your own variables as well as libraries of variables (variable groups).

Let's utilize this feature and add variables to our build configuration. Instead of hard-coding Release as the value for the configuration flag, let's parameterize it in the build and test tasks only (we will always want to use Release when generating a deployable artifact, so we will not parameterize that task).

User-defined variables can be overridden at run-time when running pipelines within ADO, otherwise the default value will be used.

Add a new variable called BuildConfiguration and give it a default value of Release. Users could then override this value at runtime with a Debug configuration when troubleshooting parity between local and pipeline builds.

Add Variable

Now that we have created and defined a variable, replace the hard-coded Release value in the build and test tasks with the $(BuildConfiguration) variable you just created.
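
As a sketch, assuming your build task passes the configuration through an arguments input (adjust to match your actual task inputs), the change looks like this:

- task: DotNetCoreCLI@2
  displayName: 'Build App'
  inputs:
    command: 'build'
    arguments: '--configuration $(BuildConfiguration)'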

Branch conditions

The pipeline is currently producing artifacts on every build, including branch and PR builds. Best practices for continuous integration say that every build on main is a release candidate, so we only need to generate and publish artifacts on the main branch. To reduce overhead, we will add conditional statements so the package and publish tasks run only on the main branch after the preceding steps succeed.

Within the package and publish tasks, add a condition key that evaluates a boolean expression; place it under the task and above the inputs.

Our condition parameter will use the eq boolean expression to compare the current branch name against the main branch name, resulting in the following condition statement:

condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')

If this boolean expression evaluates to true, then the task will run.
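
Applied to the package task, for example, it might look like this (the command and inputs are placeholders for whatever your task already defines):

- task: DotNetCoreCLI@2
  displayName: 'Package Artifact'
  condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
  inputs:
    command: 'publish'

Note that specifying a condition replaces the task's default succeeded() check, so if you also want the task to run only when the preceding steps succeeded, you can combine the two expressions: condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main')).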

Build agents

We are using Microsoft's hosted agents to run our pipelines. These pre-defined agent pools are a simple out-of-the-box solution to quickly get started on a pipeline. However, they do not come without limitations. Apart from missing software that requires additional overhead and configuration, they cannot run parallel pipeline executions in the Dojo Sandbox project.

The Dojo team created an Azure VMSS (virtual machine scale set), which is available as an agent pool in the Dojo Sandbox. These agents are configured to meet security standards and scale to support parallel pipeline runs.

Instead of using the hosted agent, we will take advantage of the available VMSS. To see the available agents, go to Project settings and select Agent pools. Select the available Dojo VMSS and use the name of the pool in the pool block of your pipeline file.

pool: name-of-agent-pool

Instead of calling a specific agent image as before (vmImage), we are now calling a pool and using any available agent within it.
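
For comparison, a sketch of the change (the vmImage value below is illustrative, and name-of-agent-pool is a placeholder for the pool name shown under Agent pools):

# Before: a Microsoft-hosted agent image
pool:
  vmImage: 'ubuntu-latest'

# After: the Dojo VMSS agent pool
pool:
  name: name-of-agent-pool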

Variable groups

In some cases, we want variables to be accessible more widely across ADO Pipelines. This is helpful when a team has multiple pipelines and wants a single location to define and maintain global variables. For example, a team may want to manage the name of its build agent pool across all pipelines.

Let's define a couple variables in a new variable group, and then reference them within the pipeline.

Create a variable group

Under Pipelines, select Library. If you or your team already has a variable group, use that existing group; otherwise, create a new one. Similar to Artifact Feeds, you should have one variable group per team. (Note: before clicking into a different area of ADO, Save your existing pipeline so you do not lose your changes.)

Add variables and definitions

Create two variables: one for the name of the agent pool and one for the name of the application's artifact. Define these variables with the values you have previously used in the pipeline.

Parameterize your pipeline with a variable group

In order to reference the variables in a variable group, we need to add a group entry under the variables block and refer to the name of the group we just created.

variables:
  - group: VG-dojo-coaches

Now we can parameterize the variables for the agent pool and the application artifact.
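
As a rough end-to-end sketch, assuming the group defines variables named AgentPoolName and ArtifactName (use whatever names you chose) and that the pipeline publishes with PublishBuildArtifacts@1:

variables:
  - group: VG-dojo-coaches

pool:
  name: $(AgentPoolName)

steps:
  # ...build, test, and package tasks...
  - task: PublishBuildArtifacts@1
    displayName: 'Publish Artifact'
    condition: eq(variables['Build.SourceBranch'], 'refs/heads/main')
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: '$(ArtifactName)'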