Improving testability of complex DBs

One of our clients was developing a new product. It was already live with multiple of their clients, and those clients were happy. Each of those clients had their own DB instance (perfectly fine and reasonable), but because the product had been developed in a very “organic” fashion, each also had their own schema, which made adding each new feature or client progressively more difficult and time consuming.

At a high level, their architecture was:

  • A SQL Server instance
  • A Python REST API
  • A series of Azure ADF workflows to import data from multiple different sources

Once the data had been ingested and transformed into the standard model, it was then further exposed via a large number of complex views which contained a lot of business logic, especially around edge cases. These edge cases didn’t affect every client, which added to the complexity of standardisation.

They came to us to help them deliver faster and more reliably.

To do this, it was necessary to standardise the DB schema whilst continuing to roll out the product to new customers (each with their own desired customisations). To achieve this, we took the following steps:

  1. A process change – all DB changes in prod now had to be made in code via an update script (no need for downgrade scripts). This stopped things from getting any worse.
  2. Introducing comparative tests at the API level (i.e. one API-DB pairing vs. a second). This allowed us to see the impact of our DB changes.
  3. Refactoring the DB to move the repeated case-when logic into custom DB functions so that it could be both reused consistently and tested
  4. Introducing integration tests at the API / DB level

The comparative tests were fairly standard and not particularly special, so we won’t go into much depth about them here. Briefly: working on the assumption that what’s already live is correct, we’re generally interested in the question ‘what has changed?’.
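
To give a flavour of the idea, a comparative test can be as simple as the sketch below. The URLs, endpoint and naive string comparison are placeholders for illustration rather than the client’s actual harness (which flattened and diffed the responses property by property).

// Minimal sketch of a comparative test: call the same endpoint on a baseline
// deployment (API + untouched prod clone) and a candidate deployment (API +
// clone with the update script applied) and report whether anything changed.
// The URLs and endpoint are placeholders rather than the client's actual API.
using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class ComparativeTestSketch
{
    public static async Task<int> Main()
    {
        using var http = new HttpClient();

        var baselineJson = await http.GetStringAsync("https://baseline.example.local/api/contracts/summary");
        var candidateJson = await http.GetStringAsync("https://candidate.example.local/api/contracts/summary");

        // Naive check: re-serialise both payloads and compare the strings.
        // The real tests flattened the responses and diffed them property by
        // property so that individual differences could be reported.
        using var baseline = JsonDocument.Parse(baselineJson);
        using var candidate = JsonDocument.Parse(candidateJson);

        var identical = JsonSerializer.Serialize(baseline.RootElement)
            == JsonSerializer.Serialize(candidate.RootElement);

        Console.WriteLine(identical
            ? "No differences detected"
            : "Differences detected - analyse before committing");

        return identical ? 0 : 1;
    }
}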

The integration tests covered any custom DB functions and any views where we wanted to ensure that the business logic in the underlying DB was correct. Note that these view level integration tests were reserved for the base level views which fed into everything else; this was a matter of pragmatism (limited time / budgets) as well as need – with the less complex views, we were already fairly confident that they were doing the right thing and as such, comparative tests could be used.
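
As an illustration of what these looked like, the sketch below shows an integration test against a custom scalar DB function, written here with xUnit and Microsoft.Data.SqlClient. The function name (dbo.fn_PaymentBucket), connection string and expected values are invented for this sketch rather than taken from the client’s schema.

// Illustrative integration test for a custom DB function. The function name,
// connection string and expected values are hypothetical - the real tests
// exercised the client's own functions and base level views.
using Microsoft.Data.SqlClient;
using Xunit;

public class DbFunctionTests
{
    private const string ConnectionString =
        "Server=localhost;Database=ClientDbClone;Integrated Security=true;TrustServerCertificate=true";

    [Theory]
    [InlineData("0", "None")]
    [InlineData("499.99", "Low")]
    [InlineData("500", "High")]
    public void PaymentBucket_ReturnsExpectedCategory(string payment, string expected)
    {
        using var connection = new SqlConnection(ConnectionString);
        connection.Open();

        // Call the scalar function directly so that the business logic is
        // exercised exactly as the views would exercise it
        using var command = new SqlCommand("SELECT dbo.fn_PaymentBucket(@payment)", connection);
        command.Parameters.AddWithValue("@payment", decimal.Parse(payment));

        var actual = (string)command.ExecuteScalar();

        Assert.Equal(expected, actual);
    }
}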

To get to where we wanted to be, it was necessary to take a series of steps. These were typically:

  1. Pick the view to be processed
  2. Work out which client’s view was the ‘most correct’
  3. For this view:
    • Create a comparative test
    • Rewrite the view as necessary, using DB functions to replace the large number of repeated case-when blocks
    • Create integration tests if appropriate
  4. Create an update script
  5. Per client:
    • Create a clone of the DB instance
    • Apply the update script
    • Launch the API pointing to the new DB
    • Run the comparative tests

If the comparative tests passed, then everything was good and the code could be committed. However, if there were differences, we had to analyse them to work out which version was correct. Then, depending on the answer, decisions could be taken:

  • The new version was correct => all good, let the client know that their numbers were changing and why
  • The new version didn’t handle some of the client specific edge cases => update the view / custom functions so that it did handle them and then repeat the above
  • It was more complicated => park it, standardise what we could and then come back to the problem later, when it would be smaller and the right course of action clearer.

Note that this was complicated by the fact that the reasons behind each of the client level customisations had been lost to history and the original developer had moved on. So a large portion of the work was trying to understand the thought process behind the change rather than blindly reimplementing.

Once we had this, we had a new baseline for what was considered good. Developers could then run the tests locally (pointing to a local instance and a prod clone for comparison), meaning that they were able to see the impact of their change before submitting it.

The CI process was also updated so that it ran the newly created tests once per client on each check-in. The per client process was:

  1. Clone the current production DB
  2. Apply any DB update scripts
  3. Launch a copy of the Python API pointing to the new DB
  4. Run the actual tests (each test was given access to a DB connection string and the Python API / security token)
  5. Push the results to their Conical instance

We then grouped the results together in a Conical evidence set so that we could see a high level overview of the impact of the change and decide what to do. This was very helpful as it allowed us to catch cases where we had missed client level customisations or, more frequently, where we discovered customisations which made zero sense.

Note that where client specific customisations were still needed, they remained possible (update scripts can have IFs in them after all), but we minimised them as much as possible.

With this infrastructure in place, it was then possible to very rapidly make DB level changes and be confident that we hadn’t broken lots of other parts of the system.

As usual, any questions, please ask.

Happy Testing


Testing collections with non-unique “unique” keys

We recently worked with a client whose API was live and being actively used, and who wanted to improve both the API and its testing. The API contained a series of end points which the client thought provided a set of rows / objects with a single row per set of unique keys (‘Contract ID’ and ‘Month’). However, the reality was that the API was returning multiple rows per expected set of unique keys.

We were expecting the ‘Contract ID’-‘Month’ tuple to be unique, however the API had other ideas:

Expected:

ContractID: 1
Month: 2025-06
Payment: 375

ContractID: 1
Month: 2025-07
Payment: 375

ContractID: 2
Month: 2025-06
Payment: 57

Actual:

ContractID: 1
Month: 2025-06
Payment: 375

ContractID: 1
Month: 2025-07
Payment: 375

ContractID: 2
Month: 2025-06
Payment: 35

ContractID: 2
Month: 2025-06
Payment: 22

Obviously the long term desired outcome was to update the API so that it behaved as expected; however, their front end had been coded in such a way as to tolerate this duplication, and they had more pressing needs for their product than being architecturally pure.

We wanted to put in a series of comparative tests. These are where we take 2 versions of the API (differing in software, configuration or anything else) and compare their outputs. Unlike classic integration tests, these are intended less as a pure pass-fail and more to let us know what the impact of releasing the new version will be.

One option here would have been to ignore this whole end point during the testing process until the API behaved as expected. This was swiftly ruled out as it relied on some of the most complex logic in the platform (heavily SQL based, so unit tests were somewhat scarcer) and we were rewriting it for them.

This left us with a few options:

  1. Do a summation in the test code – i.e. grouping all of the rows together and then testing the resulting summed rows.
  2. Use the usual collections comparison functionality where we could and then compare the “non-unique unique” row sets separately.

Option #1 was ruled out as there was no guarantee that the summation would be correct, especially if future properties were added to the returned data model, which would run the risk of false negatives. Note that because we use a code generation tool to generate the code level data model, the dynamically generated data models get updated fairly regularly anyway, so it’s unlikely that they get out of sync with the actual API being tested.

This left option #2. For this, we used the standard BorsukSoftware.Testing.Comparison.Extensions.Collections (nuget) functionality. The return type here contains:

  • Matching keys
  • Additional keys
  • Missing keys
  • Non-matching keys
  • Incomparable keys

For the incomparable keys, we get a set of:

  • the keys which were expected to be unique, but weren’t (in this example, contract ID and month)
  • the expected rows which matched these keys
  • the actual rows which matched these keys

From here, we needed to come up with a way to compare these collections. Because we weren’t interested in the returned order, the simplest approach (sketched below) was to:

  1. pick an ordering method (payment in our case)
  2. flatten down the rows using the array plugin
  3. compare these flattened values
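
The sketch below illustrates the idea in plain C#. In the real tests, the flattening and comparison were done with the BorsukSoftware object flattener (and its array plugin) and object comparer rather than by hand, and the Row type here is invented purely for illustration.

// Rough illustration of the idea in plain C# - the real implementation used
// the library tooling mentioned above rather than this hand-rolled flattening.
using System;
using System.Collections.Generic;
using System.Linq;

public record Row(int ContractId, string Month, decimal Payment);

public static class NonUniqueRowComparison
{
    // Sort the rows deterministically (by payment) and then flatten them into
    // "[index].Property" -> value pairs so that the two sets can be diffed.
    public static IDictionary<string, object> Flatten(IEnumerable<Row> rows) =>
        rows.OrderBy(r => r.Payment)
            .SelectMany((row, index) => new[]
            {
                new KeyValuePair<string, object>($"[{index}].ContractId", row.ContractId),
                new KeyValuePair<string, object>($"[{index}].Month", row.Month),
                new KeyValuePair<string, object>($"[{index}].Payment", row.Payment),
            })
            .ToDictionary(pair => pair.Key, pair => pair.Value);

    public static void Main()
    {
        var expected = new[] { new Row(2, "2025-06", 57m) };
        var actual = new[] { new Row(2, "2025-06", 35m), new Row(2, "2025-06", 22m) };

        var flattenedExpected = Flatten(expected);
        var flattenedActual = Flatten(actual);

        // Report any key which is missing from one side or whose values differ
        foreach (var key in flattenedExpected.Keys.Union(flattenedActual.Keys).OrderBy(k => k))
        {
            flattenedExpected.TryGetValue(key, out var expectedValue);
            flattenedActual.TryGetValue(key, out var actualValue);
            if (!Equals(expectedValue, actualValue))
                Console.WriteLine($"{key}: expected '{expectedValue}', actual '{actualValue}'");
        }
    }
}

Because both sides are sorted by payment before being flattened, the comparison is insensitive to the order in which the API happens to return the rows.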

The upsides of this approach:

  1. We were aware of the impact of our changes on this very important end point
  2. We didn’t have a permanent false positive in our tests. Such tests teach developers to simply ignore them, meaning an actual unexpected change in this space would be missed.
  3. It was quick to deliver

The downsides of this approach:

  1. It’s a sticky plaster – we still didn’t have a pure API
  2. When the API is fixed so that the number of rows returned drops, we’ll see very noisy results for that test run. Note that when the API is fixed, the test can also be updated during the development process to do the summation, thus proving that the totals haven’t changed. After that confirmation, the test code can be updated (most likely in a subsequent PR) to remove the summation code, for the reasons mentioned above, so that everything is clean.

The upsides outweighed the downsides and the long term fix was added to the backlog.

We did this via a helper function:

        public static (
            IReadOnlyCollection<IReadOnlyDictionary<string, object>> matching,
            IReadOnlyCollection<(IReadOnlyDictionary<string, object> Keys, IReadOnlyList<KeyValuePair<string, BorsukSoftware.Testing.Comparison.ComparisonResults>> Differences)> multipleRowSetsDifferences)
            CompareIncomparableItems<T>(
                IComparativeTestContext context,
                BorsukSoftware.ObjectFlattener.ObjectFlattener objectFlattener,
                BorsukSoftware.Testing.Comparison.ObjectComparer objectComparer,
                BorsukSoftware.Testing.Comparison.Extensions.Collections.ObjectSetComparerStandard.ComparisonResults<T> comparisonResults,
                Func<IEnumerable<T>, IOrderedEnumerable<T>> sortingFunc)
        {
            var multipleRowSetsMatching = new List<IReadOnlyDictionary<string, object>>();
            var multipleRowSetsDifferences = new List<(IReadOnlyDictionary<string, object> Keys, IReadOnlyList<KeyValuePair<string, BorsukSoftware.Testing.Comparison.ComparisonResults>> Differences)>();
            if (comparisonResults.IncomparableKeys.Count > 0)
            {
                context.LogMessage("");
                context.LogMessage(" => Comparing non-unique collections by index");

                foreach (var grouping in comparisonResults.IncomparableKeys)
                {
                    var expectedRows = grouping.Value.ExpectedObjects ?? Array.Empty<T>();
                    var actualRows = grouping.Value.ActualObjects ?? Array.Empty<T>();

                    var differences = objectComparer.CompareValues(
                        objectFlattener.FlattenObject(null, sortingFunc(expectedRows)),
                        objectFlattener.FlattenObject(null, sortingFunc(actualRows))).
                        ToList();

                    if (differences.Count == 0)
                        multipleRowSetsMatching.Add(grouping.Key);
                    else
                        multipleRowSetsDifferences.Add((grouping.Key, differences));
                }

                context.LogMessage("Summary:");
                context.LogMessage($" matching - {multipleRowSetsMatching.Count}");
                context.LogMessage($" differences - {multipleRowSetsDifferences.Count}");
            }

            return (multipleRowSetsMatching, multipleRowSetsDifferences);
        }

The client was happy and the devs were happy as they could see the impact of the fairly chunky changes that they were making.

As usual, any questions, please ask.

Happy Testing!


New version released

We’re pleased to announce the release of a new version of Conical. The main feature of this release is support for the manual testing functionality. This has been dogfooded internally and with a few beta testers for a while now and we think that it’s ready for wider use. This doesn’t mean that we no longer think that testing should be mostly automated; rather, it reflects our pragmatic belief that some testing (and evidence thereof) is vastly better than performing no testing until perfection can be achieved.

Additionally, we’ve spent a lot of time and effort improving the overall UX. As part of this, we’ve tried to style all of the pages consistently and in a much more visually appealing fashion, as well as going through all of the expected use-cases and ensuring that keyboard shortcuts (especially form submission) work as a user would expect. If you come across an area where the UX isn’t as expected, then please do get in touch and we’ll do our best to rectify it.

We will shortly be adding some more blog posts / FAQ pages on how best to use the manual testing functionality. In the meantime, you can download the new version and experiment.

Warnings

As part of this build, we’ve upgraded to .net 8. The DB driver for .net 8 can, depending on the SQL instance being connected to, complain about an untrusted certificate. If this is the case, then the connection string file will need to be updated to append the following:

;TrustServerCertificate=true

We’ve updated the first-time installer to allow users to configure this flag; however, for existing installations, we do not automatically update the connection string. This means that if this flag is required, then the file must be manually updated. This can be done by editing the config file on the container instance (it is usually mounted as a volume).
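
For illustration, a connection string which previously looked like the first line below would then become the second (the server name and credentials are placeholders):

Server=sqlserver.internal;Database=conical;User Id=conical;Password=xxxx
Server=sqlserver.internal;Database=conical;User Id=conical;Password=xxxx;TrustServerCertificate=true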

If you need any assistance with this, then please get in touch and we can assist you.

New features

Alongside the manual testing functionality, lots of other functionality has been added, including:

Improvements / Bug fixes
  • Improved disc-space monitoring service
  • Improve UX for changing passwords
  • Fix bug when writing data using product aliases
  • Improved UX for admin users when inputting data
  • Ensure all buttons etc. are styled appropriately
  • Ensure all form like pieces of functionality behave like forms
  • Ensure all dialogs force keyboard focus to the appropriate controls on display
  • Improve log-in experience so users can login directly from any page
  • Ensure product cover images can be easily updated in the UI
  • Allow test run types to be renamed
  • Improvements to lightbox control so non-image additional files can be viewed without needing to download them
  • When logging in, user passwords are now passed via the body rather than the query string for better log security. Note that the old method will still work for backwards compatibility, but is no longer recommended.
Non-functional changes
  • Standardise tree component across all use-cases
  • Additional unit tests
  • Additional integration tests
  • Refactoring following static analysis of the codebase
  • Update internals to use Task<..> everywhere
  • Simplification of CSS to avoid duplication
  • Refactored admin pages to facilitate better testing
  • Updated to use .net 8

Summary

We’re really proud of the new version and hope that you will find the new features useful.

As usual, if you have any questions or queries about anything related to Conical or testing, then please do get in touch – contactus@conical.cloud

Happy testing.


Status update

Just a little update from us. We’ve been very quiet on the releases front recently as we’ve been focussing on helping our clients improve their build, test and release processes.

As part of this, we’ve been developing functionality to allow users to store the results of manual testing in the tool. With this functionality, users can upload screenshots and text (and anything else they can think of) and have a record of what testing has been performed to make decision makers more comfortable with sign-offs. This is being “dog-fooded” as part of us helping our clients and we’re hopeful that it’ll be ready for release shortly.

Note that this functionality doesn’t negate our very strong belief in automated testing; rather, we think that it can be a very helpful intermediate step on the journey towards automated testing, as well as being useful for final ‘tyre kicking’ style tests where some last minute manual validation is valuable.

If you have any thoughts, questions, feature requests etc. about this or any other topic, then please do get in touch and we’ll see how we can help improve your confidence in your product’s releases.


Monitoring Conical with Prometheus and Grafana

One of our clients recently extended their use of Conical to include UX testing. This proved very useful to them, but also entailed uploading ~300MB of data per test cycle. As they’d originally only set themselves up with 16GB of disc space for data, this proved rather problematic: once the available space had been filled, uploads, and hence their test jobs, started failing.

On talking to the client, we asked what monitoring tools they had in place; the answer was that they had none and would like us to recommend a solution. Given that we support reporting server metrics in the Prometheus format (via /api/metrics), the natural answer was ‘use Prometheus and Grafana’.

This allowed the client to be aware of when they needed to proactively clean up their old, no longer relevant, test results rather than having to respond reactively when everything had stopped working.

It then struck us that there was no example on the web of how to configure Prometheus for use with Conical, hence this blog post.

Background

Prometheus and Grafana work together to provide a way of monitoring metrics from a range of services. Prometheus is responsible for gathering the data from a range of sources whilst Grafana can be used to display dashboards of said data.

These tools are incredibly powerful and have a vast range of additional functionality over and above our simple use-case. However, for the purposes of what we’re trying to do, the above is valid.

Note that we run all of our services using Docker containers.

To that end, we needed to do the following:

Conical:

  • Create a PAT with ‘servermetrics’ permissions

Prometheus:

  • Create a configuration file (see below) containing information about where we want to source data from
  • [Optional] Create a ‘webconfig.yml’ file to enable username / password access
  • Create a Docker instance
  • [Optional] Put behind a reverse proxy (we use nginx)

Grafana:

  • Create a Docker instance
  • [Optional] Put behind a reverse proxy (we use nginx)

Installing / Configuring Prometheus

Our requirements for configuring Prometheus were:

  • Fault tolerant
  • Can source data from multiple Conical instances
  • Restricted access

To that end, we went for a simple approach of:

  • A mounted volume to contain the prometheus DB
  • A config file
  • A webconfig.yml file containing the required user details

Configuration File (/datadrive/prometheus/config/prometheus.yml)

global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

scrape_configs:
  - job_name: 'demo-conical'
    # Override the global default and scrape targets from this job every 30 seconds.
    scrape_interval: 30s
    metrics_path: "/api/metrics"
    scheme: https
    authorization:
      type: Bearer
      credentials: yourPATGoesHere
    static_configs:
      - targets: ['demo.conical.cloud']

To monitor additional instances, simply replicate the final block with the additional details.
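
For example, to also scrape a second instance, the configuration might gain a further entry along the following lines (the job name, PAT and target here are placeholders):

  - job_name: 'client-a-conical'
    scrape_interval: 30s
    metrics_path: "/api/metrics"
    scheme: https
    authorization:
      type: Bearer
      credentials: anotherPATGoesHere
    static_configs:
      - targets: ['conical.client-a.example.com']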

WebConfig File (/datadrive/prometheus/config/webconfig.yml)

basic_auth_users:
  admin: encryptedPasswordHere

Details of how the basic auth works can be found on the Prometheus website, along with instructions on how to generate the encrypted password.

Docker Command to launch the container

sudo docker container run \
    -d \
    --name prometheus \
    --network monitoring \
    --hostname prometheus \
    -p 9090:9090 \
    -v /datadrive/prometheus/config:/etc/prometheus \
    -v /datadrive/prometheus/data:/var/prometheus \
    --restart always \
    -u 1002 \
    prom/prometheus:latest \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/var/prometheus \
    --web.config.file=/etc/prometheus/webconfig.yml

Note that we assigned ownership of /datadrive/prometheus to user 1002.

Installing Grafana

Installing the Grafana instance is much simpler and can be done with the following command:

sudo docker run \
  --network monitoring \
  --hostname grafana \
  -d \
  --name grafana \
  -p 3000:3000 \
  -v /datadrive/grafana:/var/lib/grafana \
  -u 1003 \
  --restart always \
  grafana/grafana:latest

Note that /datadrive/grafana is owned by user 1003.

Using Grafana with a reverse proxy (NGINX)

If you’re planning on using a reverse proxy with Grafana, then you’ll need to ensure that the forwarding block sets the Host header, e.g.

    location / {
        proxy_set_header Host $http_host;
        proxy_pass http://internalIPAddress:3000;
    }

If this step isn’t done, then Grafana will subsequently misbehave when a user tries to log in.

Configuring Grafana

There are lots of tutorials on the internet as to how to configure data sources in Grafana, so we will not seek to duplicate that information here. The main things to remember are:

  • The URL of the Prometheus server is from the perspective of the Grafana instance. Because we put both services in the same Docker network and specified host names, we can simply use http://prometheus:9090 and bypass all of the nginx functionality (which is reserved for enabling access from outside of our internal network)
  • If using basic auth, then you’ll need to remember to set the user name / password combination.

Conclusion

With this set up, we’ve been able to monitor both our own demo instance of Conical and some of our clients’ instances, allowing us to be proactive in fixing problems (usually disc space) before they become real issues.

We hope that you will find these instructions helpful if you wish to add monitoring to your own instances. If you have any questions about your Conical instances or how we can help you improve your overall testing, then please do get in touch – contactus@conical.cloud.

Happy testing.


New version released

We’re pleased to announce that a new version of Conical is now available. This contains a few small technical improvements as well as some more UX improvements.

Highlighting of flattened Json

The highlighting functionality from the results text display has been brought into the flattened results Json display. This should assist in interpreting test results.

Better 404 experience

When navigating to incorrect links, the experience is now slightly better.

Data deletion

A bug was discovered whereby data wasn’t always removed from disc when a test run or evidence set was deleted. This has been rectified.

Note that any data which was previously left dangling is unaffected by this change. We are adding functionality, which will be available in a future version, to make it easier to clean up this data.

As usual, if you have any requests, suggestions or comments then please do get in touch.

Happy testing!


New version released

We’re pleased to announce that a new version of Conical is now available. This comes with a range of UX improvements to make the tool more powerful. These changes include:

Results text / logs

When viewing these, the user can now apply filters to narrow down the range of rows which are displayed. This can be useful when a user is trying to find specific messages in the output. To further assist in this, users can choose to highlight rows which match their criteria. By using these 2 features, it should be simpler to hunt for the rows of interest.

Additionally, where the source data points are large, users can now use pagination to improve the responsiveness of the browser.

Results json

It’s now possible to flatten the results json and apply filters to the flattened data.

Improved UX for non-logged in users

Previously, when a non-logged in user clicked on a link, they were presented with an error screen. They were required to navigate to the profile page, log in and then click on the link again. To improve that experience, they can now log in directly from the error page.

Filtering

Filtering functionality has been added to audit trails and .net assembly information.

Additional files

Some tidying up has been done here to improve the consistency of the experience across all usages of additional files.

As usual, if you have any requests, suggestions or comments then please do get in touch.

Happy testing!


New version released

Although the blog has been quiet for the last couple of months, our keyboards have been anything but. We’ve been working with our clients to add additional features to simplify their processes and improve their ability to present testing results to their clients. These new features include:

UX improvements

A lot of small improvements have been made to the tool to improve its usability. We’re always keen on user feedback, so if there are any aspects of the tool which you think that we can improve, then please do let us know.

Lightboxes for media

As part of extending the tool to better facilitate UX testing, users can now view any additional files using a lightbox.

Product dashboards

Users can now upload multiple dashboards per product (as opposed to previously only being able to configure the front page). This functionality can be thought of as a ‘mini CMS for test results’ allowing users to create customised presentation of the data, typically a dashboard per release or CI pipeline. These dashboards can contain standard HTML alongside Conical specific widgets for accessing test data.

We are currently using them to allow our clients to present an overview of a release candidate’s testing status, thereby allowing their project owners to see the status at a glance.

We currently have support for embedding searches alongside their results. Additional widgets will be added as user requirements become clearer. If you have suggestions or requirements for additional widgets which would be useful, then please do get in touch.

As usual, if you have any questions about Conical or how we can help you improve your build, test and release processes, then please do contact us – contactus@conical.cloud.

Happy testing


New version released

We’ve been busy this last month helping our clients use Conical to improve their testing processes. Some cool new features coming out of this work will be announced and released shortly, but in the meantime, we’re pleased to announce that a new version of Conical has been released.

This version has a few improvements, including:

  • Improvements to the UX in the admin section
  • Improved user searching functionality
  • Long tags now fail gracefully (BadRequest) rather than causing a server error
  • Server metrics – free space on mounted discs on Linux now reports the correct value

Additionally, we found a problem with product level aliases when combined with product level privileges. This has now been fixed.

If you would like us to assist you with your testing and release processes then do get in touch with us at services@conical.cloud. We can help all sizes and types of organisations and we relish a challenge!

As usual, any questions, suggestions or comments, please get in touch.

Happy testing.


Uploading unit test results from Azure DevOps

Following on from our previous blog post on uploading unit test results, we recently had a client who wished to do this from their Azure DevOps pipeline.

The main intent here was to have a single portal containing all of their testing evidence which they could share with their clients, without those clients needing access to the various DevOps pipelines.

Initial State

They were running their tests using the following:

- task: DotNetCoreCLI@2
  displayName: "Run Tests"
  inputs:
    command: 'test'
    projects: '**/*Tests.csproj'
    arguments: '--no-build --configuration:Release'

It’s worth noting that the above task will automatically generate the trx files and store them in $(Agent.TempDirectory) (documentation), so no changes are required to the test run step in order to generate them.

In the above situation, each project will be run independently and as such, each test project will output its own trx file. We will update the trx uploader tool to handle multiple source trx files but, in the meantime, to upload these results to Conical, each trx file is treated as its own test run set and we then form an evidence set from these test run sets, which is presented to the end client.

Additional Steps

To do the uploading, we created:

  1. A new pipeline variable to contain the PAT for uploading data to Conical
  2. An additional task in the job
  3. An additional PowerShell script to do the work

We decided to call the new variable ‘CONICAL_UPLOAD_TOKEN’. We declared it as needing to be kept secret and as such, it needed to be exposed explicitly to the task, i.e.

- task: PowerShell@2
  displayName: "Publish results to Conical"
  inputs:
    filePath: '$(Build.SourcesDirectory)/uploadTestResultsToConical.ps1'
  env:
    CONICAL_UPLOAD_TOKEN: $(CONICAL_UPLOAD_TOKEN)

The upload script was then as follows:

$sourceDirectory = "$env:Agent_TempDirectory"
echo "Source directory: $sourceDirectory"

$matchingFiles = Get-ChildItem -Path "$sourceDirectory" -filter *.trx

# Ensure we have the uploading tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

if( $LASTEXITCODE -ne 0 ) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader
}

foreach ($file in $matchingFiles)
{
  echo "Dealing with $file"

  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server https://conical.yourcompany.com `
    -product productName `
    -source "$sourceDirectory\$file" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId" `
    -tag "devops" `
    -tag "SourceVersion-$env:Build_SourceVersion"
}

# Ensure we have the ES creation tool
dotnet tool update BorsukSoftware.Conical.Tools.EvidenceSetCreator

if( $LASTEXITCODE -ne 0 ) {
  dotnet new tool-manifest
  dotnet tool install BorsukSoftware.Conical.Tools.EvidenceSetCreator
}

echo "Creating evidence set"
dotnet tool run BorsukSoftware.Conical.Tools.EvidenceSetCreator `
  -server https://conical.yourcompany.com `
  -token "${env:CONICAL_UPLOAD_TOKEN}" `
  -product productName `
  -searchcriteriacount 1 `
  -searchcriteria 0 product "productName" `
  -searchcriteria 0 tag "BuildID-$env:Build_BuildId" `
  -tag "BuildID-$env:Build_BuildId" `
  -tag "devops" `
  -name "Unit Tests" `
  -description "Combined view of all unit tests" `
  -link "Devops Pipeline" "https://xxx.visualstudio.com/xxx/_build/results?buildId=$env:Build_BuildId&view=results" "Link to the pipeline" `
  -link "Source Code" "https://xxx.visualstudio.com/xxx/_git/xxx?version=GB$env:Build_SourceBranchName" "Source Branch"

Note that this code is silently tolerant of upload failures; this is more by oversight than by explicit design. If we wanted to be strict about upload failures, then we would need to check $LASTEXITCODE after each call to the uploader, in a similar way to how we do so for the tool installation.
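
For example, each upload within the loop could fail fast along the following lines; this is a sketch only, reusing the same $LASTEXITCODE pattern as the tool installation checks above (with some of the tags omitted for brevity):

  dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader `
    -server https://conical.yourcompany.com `
    -product productName `
    -source "$sourceDirectory\$file" `
    -token "${env:CONICAL_UPLOAD_TOKEN}" `
    -testRunType "Unit Test" `
    -tag "BuildID-$env:Build_BuildId"

  # Fail fast rather than silently tolerating a failed upload
  if( $LASTEXITCODE -ne 0 ) {
    throw "Failed to upload '$file' to Conical"
  }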

Summary

After these changes, the client was able to present the results of this portion of their testing in a nice, easy to consume fashion.

The next step for us is to finish the TSR and ES comparison functionality so that the client will be able to see the differences in test populations between multiple runs. This’ll allow their end client to see how the test universe has changed (hopefully in an expansionary way) between 2 different releases.

As usual, if you have any questions, queries or comments about this or any other aspect of automating your testing, then please do get in touch.

Happy testing!