Uploading unit test results from Azure DevOps

Following on from our previous blog post on uploading unit test results, we recently had a client who wished to do this from their Azure DevOps pipeline.

The main intent here was to have a single portal containing all of their testing evidence which they could share with their clients, without those clients needing access to the various DevOps pipelines.

Initial State

They were running their tests using the following:

- task: DotNetCoreCLI@2
  displayName: "Run Tests"
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--no-build --configuration Release'

It’s worth noting that the above task will automatically generate the trx files and store them in $(Agent.TempDirectory) (see the documentation), so no changes are required to the test-running step in order to generate the trx files.

In the above situation, each project is run independently, so each test project outputs its own trx file. We will update the trx uploader tool to handle multiple source trx files, but in the meantime, to upload these results to Conical, each trx file is treated as its own test run set and we then form an evidence set from these test run sets, which is what is presented to the end client.

Additional Steps

To do the uploading, we created:

  1. A new pipeline variable to contain the PAT for uploading data to Conical
  2. An additional task in the job
  3. An additional PowerShell script to do the work

We decided to call the new variable ‘CONICAL_UPLOAD_TOKEN’. We declared it as a secret and, as such, it needed to be exposed explicitly to the task, i.e.

- task: PowerShell@2
  displayName: "Publish results to Conical"
  inputs:
    filePath: '$(Build.SourcesDirectory)/uploadTestResultsToConical.ps1'
  env:
    CONICAL_UPLOAD_TOKEN: $(CONICAL_UPLOAD_TOKEN)

The upload script was then as follows:

$sourceDirectory = "$env:Agent_TempDirectory"
echo "Source directory: $sourceDirectory"

$matchingFiles = Get-ChildItem -Path "$sourceDirectory" -filter *.trx

# Ensure we have the uploading tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

if ($LASTEXITCODE -ne 0) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader
}

foreach ($file in $matchingFiles) {
  echo "Dealing with $($file.Name)"

  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server `
    -product productName `
    -source "$($file.FullName)" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId" `
    -tag "devops" `
    -tag "SourceVersion-$env:Build_SourceVersion"
}

# Ensure we have the ES creation tool
dotnet tool update BorsukSoftware.Conical.Tools.EvidenceSetCreator

if ($LASTEXITCODE -ne 0) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.EvidenceSetCreator
}

echo "Creating evidence set"
dotnet tool run BorsukSoftware.Conical.Tools.EvidenceSetCreator `
  -server `
  -token "${env:CONICAL_UPLOAD_TOKEN}" `
  -product productName `
  -searchcriteriacount 1 `
  -searchcriteria 0 product "productName" `
  -searchcriteria 0 tag "BuildID-$env:Build_BuildId" `
  -tag "BuildID-$env:Build_BuildId" `
  -tag "devops" `
  -name "Unit Tests" `
  -description "Combined view of all unit tests" `
  -link "Devops Pipeline" "$env:Build_BuildId&view=results" "Link to the pipeline" `
  -link "Source Code" "$env:Build_SourceBranchName" "Source Branch"

Note that this code is silently tolerant of upload failures; this is by oversight rather than explicitly desired. If we wanted to be strict about upload failures, we would need to check $LASTEXITCODE after each call to the uploader, in a similar way to how we do for the tool installation.
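For reference, a stricter version of the upload loop might look something like the following sketch (the `exit 1` and the error message are our additions; this assumes that failing the PowerShell task on the first non-zero exit code is the desired behaviour):

foreach ($file in $matchingFiles) {
  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server `
    -product productName `
    -source "$($file.FullName)" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId"

  # Fail the build step on the first failed upload
  if ($LASTEXITCODE -ne 0) {
    Write-Error "Upload failed for $($file.Name)"
    exit 1
  }
}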


After these changes, the client was able to present the results of this portion of their testing in a nice, easy-to-consume fashion.

The next step for us is to finish the test run set and evidence set comparison functionality so that the client will be able to see the differences in test populations between multiple runs. This’ll allow their end client to see how the test universe has changed (hopefully in an expansionary way) between two releases.

As usual, if you have any questions, queries or comments about this or any other aspect of automating your testing, then please do get in touch.

Happy testing!


New version released

We’re pleased to announce a new version of Conical has been released.

This version has a few new features, the main ones being:

  • The internal storage of ‘creator’ for test run sets has been updated so that it always reflects the current name of the user who uploaded the data, rather than their name at the time of upload
  • Usernames can now contain ‘.’

Note that this is the first version of Conical for which automated Selenium tests have been used as part of the release testing process. We will provide more details on how we use Selenium in future blog posts. As part of this testing, we’ve also added a range of additional features to the book of work which’ll help other users use Conical for their own UI testing.

Happy testing


New version released

We’re pleased to announce a new version of Conical has just been released.

Along with a few minor UI tweaks, the main feature of this release is the ability to specify ad hoc XPath queries when looking at results XML.

This simplifies the analysis of results where the user wishes to perform some quick querying on the data without needing to create a custom XSLT.

Note that this functionality doesn’t replace the more powerful XSLT transformation feature; it should be seen as a quick investigation tool, with the XSLT feature then being used once the requirements are better known (remember that the XSLT functionality allows for parameterisation).
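If XPath itself needs a quick refresher, the following PowerShell snippet shows the style of query involved (this is purely illustrative; the XML document and test names are invented and not Conical-specific):

$xml = [xml] @"
<results>
  <test name="parse-empty" outcome="Passed" />
  <test name="parse-large" outcome="Failed" />
</results>
"@

# Select every test element whose outcome attribute is 'Failed'
$failed = $xml.SelectNodes("//test[@outcome='Failed']")
$failed | ForEach-Object { $_.name }   # -> parse-large

The same query syntax can then be pasted into the results XML view to filter the uploaded data.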

To use the feature, simply click on the search icon in the results XML tab and follow the prompts.

As usual, if you have any comments, feedback or suggestions, then please do get in touch.

Happy testing


New version released

We’re pleased to announce that we’ve just released a new version of Conical. The major features are:

  • Ability to search both test run sets and evidence sets by tags through the UI
  • Ability to see the history of a test run
  • Improvements to the UI to avoid unnecessarily reloading data

As usual, if you have any comments, questions or feedback, then please do get in touch.

Happy testing


Creating evidence sets from the command line

As part of the recently added support for evidence sets, we released a .net tool which can be used to create evidence sets from the command line without the need to write a single line of devops code.

The tool – BorsukSoftware.Conical.Tools.EvidenceSetCreator – is available on NuGet and the source is available on GitHub.

Basic Idea

The premise behind the tool is that there’s a programmatic way to identify all of the required inputs to an evidence set, even if this needs to be broken down into multiple criteria.

Following from that principle, the typical workflow is that, for a pipeline which creates the source data, each of the uploaded test run sets is tagged with a unique identifier. Internally, we use ‘ci-%buildNumber%’ but you’re obviously free to come up with something which works for you.

Once you have this identifier defined, it’s trivial to create an additional step in your CI pipeline which runs the tool to create the evidence set. For our internal usage, we have something akin to the following:

dotnet tool run BorsukSoftware.Conical.Tools.EvidenceSetCreator 
  -token "thisIsntOurActualToken"
  -product "dogfood-deployment"
  -searchcriteriacount 2
  -searchcriteria 0 product "dogfood-deployment"
  -searchcriteria 0 tag ci-%build.number%
  -searchcriteria 0 tag api
  -searchcriteria 0 prefix api
  -searchcriteria 1 tag ci-%build.number%
  -searchcriteria 1 tag deployment
  -searchcriteria 1 product "dogfood-deployment"
  -searchcriteria 1 prefix deployment
  -tag ci-%build.number%
  -name "Integration tests"
  -description "Combined view"
  -link "Team City" "" "CI job"

This has the following meaning:

  1. There are 2 different sets of test run sets which contribute to our evidence set
    1. Criteria #0 searches for everything in dogfood-deployment which is tagged with both ‘ci-%build.number%’ and ‘api’. These results will have a prefix of ‘api’ (i.e. a test called ‘group1\group2\testName’ would expand to ‘api\group1\group2\testName’ in the evidence set)
    2. Criteria #1 searches for everything in dogfood-deployment which is tagged with both ‘ci-%build.number%’ and ‘deployment’. These tests are then prefixed with ‘deployment’
  2. The generated evidence set will:
    1. be tagged with ‘ci-%build.number%’
    2. have an additional link attached – the URL expands to the actual TeamCity job which generated all of the source data

Getting started

If it’s been a while since you last used .net tools, then full information from Microsoft can be found here.

The very quick ‘getting started’ steps to follow to get you ready are:

# Create the new manifest
dotnet new tool-manifest

# Install the tool
dotnet tool install BorsukSoftware.Conical.Tools.EvidenceSetCreator

Note that if you’re running the above in a CI process, and your tooling reuses the same workspace etc., then the install process won’t necessarily always ensure that the latest version of the tool is available (this caught us out ourselves a few times when we were adding a new feature to the tool!). To handle this, you can also run:

dotnet tool update BorsukSoftware.Conical.Tools.EvidenceSetCreator

As usual, if you have any problems, suggestions or queries about any of this, then please don’t hesitate to get in touch through any of the usual routes.

Happy testing.


Uploading unit test results

Although Conical has never been intended to replace existing unit test / CI workflow tools, it’s fairly common for teams to have a series of what are actually integration or regression tests structured as unit tests (if only because it’s rather easy to do).

Obviously, in these circumstances, we would tend to advocate having a more appropriate, specialised piece of software to handle their different requirements. However, we acknowledge that in a lot of circumstances this might be overkill. Since our aim is to help you improve your testing at a reasonable cost, rather than to pursue a prohibitively expensive and unrealistic testing perfection, we’re pragmatic about how we can help people’s existing processes.

To that end, we’ve released a new tool to NuGet, BorsukSoftware.Conical.Tools.TRXUploader (source on GitHub). Full instructions on how to use the tool are provided on the GitHub page.

With this approach, it’s possible to use your existing testing processes etc. but report your results in a nicer, more accessible fashion and then to subsequently improve the generation process if this would be beneficial to your product.

Generating TRX files

As a quick refresher, it’s very easy to generate a trx file from the command line. Navigate to the directory containing your tests’ project file and run:

dotnet test --results-directory ../testOutput --logger "trx;logfilename=output.trx"

This will generate a trx file in the output directory.

Installing the upload tool

The tool is packaged as a .net tool so you can follow the instructions on MSDN. In short:

  1. Create a tool manifest
  2. Install or update the tool
  3. Run the tool

Note that we would always recommend running the update step as well in order to pick up the latest version of the tool.

These instructions expand into:

# Create manifest
dotnet new tool-manifest

# Install tool
dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader

# Update tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

# Run tool
dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader \
  -server \
  -product "myProduct" \
  -source "output.trx" \
  -token "noThisIsntOurToken" \
  -tag "local" \
  -tag "example" \
  -testRunType "Unit Test"

Viewing the results

When the results are uploaded to the Conical instance, they are mapped as one unit test run to one Conical test run, with the tests subsequently displayed grouped by name (‘.’ characters are treated as hierarchy separators, so a test called MyApp.Tests.ParserTests.HandlesEmptyInput would appear under MyApp\Tests\ParserTests).

The details from the trx file (e.g. the machine details, timings etc.) are uploaded as results XML with any logging output being stored as logs.

Future steps

If you have any suggestions as to how to improve the tool or make it easier to handle your use-case, then do get in touch, either via the contact details below or via GitHub.

Happy testing.


Evidence Sets Released

We’re pleased to announce that we’ve released a new version of Conical containing support for evidence sets. These allow users to have a high-level view of the state of their entire release candidate across multiple test run sets and products.

It’s taken a little bit more time than we had originally planned to “dot the ‘i’s and cross the ‘t’s”, but it’s definitely worth the wait. We updated the original implementation to remove the ability to mark tests as ‘pass after review’ (PAR) as the feedback we received was that having an immutable overview was rather useful in its own right. The PAR functionality will be coming soon within the general release approval functionality.

To make it easier to create evidence sets from the command line / CI pipeline, we’ve released a tool on NuGet – BorsukSoftware.Conical.Tools.EvidenceSetCreator – to make it trivially easy to do so without needing to write any code.

We use this tool ourselves in our CI processes prior to release to create an evidence set representing all of the test material that is run against our final candidate docker image, i.e. we can see the results of all testing for that package in a single place, so we can be confident that what we’re releasing works.

To get started, simply download Conical and follow the installation instructions. And as always, if you have any requests / comments, please do get in touch with us and we’ll do our very best to help.

Happy testing


Introducing Evidence Sets

[Updated to reflect change in feature scope following user feedback]

We’re pleased to announce a new feature that we’re working on – Evidence Sets. The premise here is that this allows a user to group a set of test run sets together to form a single viewable unit which can be used to provide evidence (hence the name) of testing.

Evidence sets can be used in multiple different ways, including:

  • Allowing for failing tests to be re-run if desired without having to re-run everything.
  • Grouping multiple pieces of testing together to have a single reference for test results and for end user sign-off.

Main features

The main features of evidence sets are:

  • Ability to collate multiple test run sets together, including:
    • optional prefixes to create custom hierarchies
    • subsets of test runs as desired
    • from multiple different products
  • Ability to have multiple test runs contribute to a single test (e.g. to handle re-runs). There are several options (best result, worst result, first result, last result or not allowed) for deciding the state of a test if multiple contributing test runs are specified


Internally, we use the evidence set functionality to allow us to have a coherent overview of the state of the application prior to release. For each release, we want to run:

  • integration tests for the API layer
    • For a fresh install
    • For each DB upgrade path
  • integration tests for the DB update functionality
  • integration tests for the fresh install functionality

Additionally, we would like to be able to show the results of the UI testing.

Example – API Integration Tests

The API integration tests are designed to check that a given instance of the API performs as expected and cover everything from uploading results to checking that the security model works as expected. Given that we want to ensure that the functionality is correct regardless of whether it’s a fresh install or an upgraded install, we want to run the same set of integration tests against as many combinations as possible. As the running of these tests is highly automated (one just needs to specify the target server and the appropriate admin user to start), they are trivially easy to run and can generate a large number of result sets to analyse.

By using the evidence sets functionality, we can collate all of these result sets into a single display unit so that it’s very easy to get an overview of the state of the release candidate. We do this by using the ‘prefix’ functionality so it’s very clear where there’d be a problem, e.g.

  • api
    • clean
    • upgrades
      • v1
      • v2
      • v3
      • etc.

And then the usual test hierarchy applies underneath each node.

Note that as we wouldn’t release anything which is non-green, we don’t need to leverage the sign-off functionality in evidence sets.

In addition to the functionality above, we then add the installer / upgrade test results to the same evidence set (under appropriate prefixes) so we can demonstrate to the people signing off the release that everything is good.


We’re putting the final touches to the functionality and hope to have this work complete in the next week or so, at which point we’ll make it available to all of our clients in the usual fashion.

In the meantime, if you have any questions, queries or suggestions, then please do get in touch with us.


New version released

We’re pleased to announce that a new version of Conical has been released with a few minor bug fixes as well as the ability to see more information about the hosting environment.

As usual, to get started, go to our Docker page.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker.

This version contains a few minor fixes as well as a small update to the underlying DB schema.

The schema change will be applied by the tool automatically after the container starts up and the super user code is installed (see your container logs for this code).

To get started, go to our Docker page and follow the instructions.