
Uploading unit test results from Azure DevOps

Following on from our previous blog post on uploading unit test results, we recently had a client who wished to do this from their Azure DevOps pipeline.

The main intent here was to have a single portal containing all of their testing evidence which they could share with their clients, without those clients needing access to the various DevOps pipelines.

Initial State

They were running their tests using the following:

- task: DotNetCoreCLI@2
  displayName: "Run Tests"
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--no-build --configuration:Release'

It’s worth noting that the above task will automatically generate the TRX files and store them in $(Agent.TempDirectory) (documentation), so no changes to the test-running step are required in order to generate them.
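For reference, if the tests were ever run outside of this task, e.g. by calling dotnet test directly from a script step, then roughly the same behaviour could be reproduced by passing the TRX logger and results directory arguments explicitly. A minimal sketch (MySolution.sln is a placeholder for the actual solution file):

- script: dotnet test MySolution.sln --no-build --configuration Release --logger trx --results-directory $(Agent.TempDirectory)
  displayName: "Run Tests (explicit TRX logging)"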

In the above situation, each test project is run independently and so outputs its own TRX file. We will update the TRX uploader tool to handle multiple source TRX files but, in the meantime, to upload these results to Conical, each TRX file is treated as its own test run set and an evidence set is then formed from these test run sets and presented to the end client.

Additional Steps

To do the uploading, we created:

  1. A new pipeline variable to contain the PAT for uploading data to Conical
  2. An additional task in the job
  3. An additional PowerShell script to do the work

We decided to call the new variable ‘CONICAL_UPLOAD_TOKEN’. As it was declared as a secret, it needed to be exposed explicitly to the task as an environment variable, i.e.

- task: PowerShell@2
  displayName: "Publish results to Conical"
  inputs:
    filePath: '$(Build.SourcesDirectory)/uploadTestResultsToConical.ps1'
  env:
    CONICAL_UPLOAD_TOKEN: $(CONICAL_UPLOAD_TOKEN)

The upload script was then as follows:

# Azure DevOps exposes pipeline variables to scripts as environment variables
# with '.' replaced by '_', so Agent.TempDirectory is available here
$sourceDirectory = "$env:Agent_TempDirectory"
echo "Source directory: $sourceDirectory"

$matchingFiles = Get-ChildItem -Path "$sourceDirectory" -filter *.trx

# Ensure we have the uploading tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

if( $LASTEXITCODE -ne 0 ) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader
}

foreach ($file in $matchingFiles)
{
  echo "Dealing with $file"

  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server https://conical.yourcompany.com `
    -product productName `
    -source "$sourceDirectory\$file" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId" `
    -tag "devops" `
    -tag "SourceVersion-$env:Build_SourceVersion"
}

# Ensure we have the ES creation tool
dotnet tool update BorsukSoftware.Conical.Tools.EvidenceSetCreator

if( $LASTEXITCODE -ne 0 ) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.EvidenceSetCreator
}

echo "Creating evidence set"
dotnet tool run BorsukSoftware.Conical.Tools.EvidenceSetCreator `
  -server https://conical.yourcompany.com `
  -token "${env:CONICAL_UPLOAD_TOKEN}" `
  -product productName `
  -searchcriteriacount 1 `
  -searchcriteria 0 product "productName" `
  -searchcriteria 0 tag "BuildID-$env:Build_BuildId" `
  -tag "BuildID-$env:Build_BuildId" `
  -tag "devops" `
  -name "Unit Tests" `
  -description "Combined view of all unit tests" `
  -link "Devops Pipeline" "https://xxx.visualstudio.com/xxx/_build/results?buildId=$env:Build_BuildId&view=results" "Link to the pipeline" `
  -link "Source Code" "https://xxx.visualstudio.com/xxx/_git/xxx?version=GB$env:Build_SourceBranchName" "Source Branch"

Note that this code is silently tolerant of upload failures; this is more by oversight than by explicit design. If we wanted to be strict about upload failures, we would need to check $LASTEXITCODE after each call to the uploader, in a similar way to how we do so for the tool installation.
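A minimal sketch of that stricter variant, using the same uploader arguments as above (the tag list is trimmed here for brevity):

foreach ($file in $matchingFiles)
{
  echo "Dealing with $file"

  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server https://conical.yourcompany.com `
    -product productName `
    -source "$sourceDirectory\$file" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId"

  # Fail the script (and hence the pipeline step) on the first failed upload
  if ($LASTEXITCODE -ne 0) {
    Write-Error "Failed to upload '$file' to Conical"
    exit 1
  }
}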

Summary

After these changes, the client was able to present the results of this portion of their testing in a nice, easy-to-consume fashion.

The next step for us is to finish the test run set and evidence set comparison functionality so that the client will be able to see the differences in test populations between multiple runs. This will allow their end client to see how the test universe has changed (hopefully in an expansionary way) between two different releases.

As usual, if you have any questions, queries or comments about this or any other aspect of automating your testing, then please do get in touch.

Happy testing!