
New version released

We’re pleased to announce that we’ve just released a new version of Conical. The major features are:

  • Ability to search both test run sets and evidence sets by tags through the UI
  • Ability to see the history of a test run
  • Improvements to the UI to avoid unnecessarily reloading data

As usual, if you have any comments, questions or feedback, then please do get in touch.

Happy testing


Creating evidence sets from the command line

As part of the recently added support for evidence sets, we released a .net tool which can be used to create evidence sets from the command line without the need to write a single line of devops code.

The tool – BorsukSoftware.Conical.Tools.EvidenceSetCreator – is available on NuGet and the source is available on GitHub.

Basic Idea

The premise behind the tool is that there’s a programmatic way to identify all of the required inputs to an evidence set, even if this needs to be broken down into multiple criteria.

Following from that principle, the typical workflow is that, for a pipeline which creates the source data, each of the uploaded test run sets is tagged with a unique identifier. Internally, we use ‘ci-%buildNumber%’ but you’re obviously free to come up with something which works for you.

Once you have this identifier defined, it’s trivial to create an additional step in your CI pipeline which runs the tool to create the evidence set. For our internal usage, we have something akin to the following:

dotnet tool run BorsukSoftware.Conical.Tools.EvidenceSetCreator \
  -server https://demo.conical.cloud \
  -token "thisIsntOurActualToken" \
  -product "dogfood-deployment" \
  -searchcriteriacount 2 \
  -searchcriteria 0 product "dogfood-deployment" \
  -searchcriteria 0 tag ci-%build.number% \
  -searchcriteria 0 tag api \
  -searchcriteria 0 prefix api \
  -searchcriteria 1 tag ci-%build.number% \
  -searchcriteria 1 tag deployment \
  -searchcriteria 1 product "dogfood-deployment" \
  -searchcriteria 1 prefix deployment \
  -tag ci-%build.number% \
  -name "Integration tests" \
  -description "Combined view" \
  -link "Team City" "%tcLinkRoot%%teamcity.build.id%" "CI job"

This has the following meaning:

  1. There are 2 different search criteria identifying the test run sets which contribute to our evidence set
    1. Criteria #0 searches for everything in dogfood-deployment which is tagged with both ‘ci-%build.number%’ and ‘api’. These results will have a prefix of ‘api’ (i.e. a test called ‘group1\group2\testName’ would expand to ‘api\group1\group2\testName’ in the evidence set)
    2. Criteria #1 searches for everything in dogfood-deployment which is tagged with both ‘ci-%build.number%’ and ‘deployment’. These tests are then prefixed with ‘deployment’
  2. The generated evidence set will:
    1. be tagged with ‘ci-%build.number%’
    2. have an additional link attached – the URL expands to the actual Teamcity job which generated all of the source data

Getting started

If it’s been a while since you last used .net tools, then full information from Microsoft can be found here.

The quick ‘getting started’ steps to get you ready are:

# Create the new manifest
dotnet new tool-manifest

# Install the tool
dotnet tool install BorsukSoftware.Conical.Tools.EvidenceSetCreator

Note that if you’re running the above in a CI process, and your tooling reuses the same workspace etc., then the install process won’t necessarily ensure that the latest version of the tool is available (this caught us out a few times ourselves when we were adding a new feature to the tool!). To handle this, you can also run:

dotnet tool update BorsukSoftware.Conical.Tools.EvidenceSetCreator

As usual, if you have any problems, suggestions or queries about any of this, then please don’t hesitate to get in touch through any of the usual routes.

Happy testing.


Uploading unit test results

Although Conical has never been intended to replace existing unit test / CI workflow tools, it’s fairly common for teams to have a series of what are actually integration or regression tests structured as unit tests (if only because it’s rather easy to do).

Obviously, in these circumstances, we would tend to advocate for using a more appropriate, specialised piece of software to handle the different requirements of these tests. However, we acknowledge that in a lot of circumstances this might be overkill, and as our aim is to help you improve your testing at a reasonable cost rather than to chase a prohibitively expensive and unrealistic testing perfection, we’re pragmatic about how we can help people’s existing processes.

To that end, we’ve released a new tool to NuGet – BorsukSoftware.Conical.Tools.TRXUploader (source on GitHub). Full instructions on how to use the tool are provided on the GitHub page.

With this approach, it’s possible to keep your existing testing processes but report your results in a nicer, more accessible fashion, and then subsequently improve the generation process if that would be beneficial to your product.

Generating TRX files

To ‘refresh your memory’, it’s very easy to generate a trx file from the command line. Navigate to the directory containing your tests’ project file and run:

dotnet test -r ../testOutput --logger "trx;logfilename=output.trx"

This will generate a TRX file (output.trx) in the specified output directory.

Installing the upload tool

The tool is packaged as a .net tool so you can follow the instructions on MSDN. In short:

  1. Create a tool manifest
  2. Install or update the tool
  3. Run the tool

Note that we would always recommend updating the tool as well in order to pick up the latest version.

These instructions expand into:

# Create manifest
dotnet new tool-manifest

# Install tool
dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader

# Update tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

# Run tool
dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader \
  -server https://demo.conical.cloud \
  -product "myProduct" \
  -source "output.trx" \
  -token "noThisIsntOurToken" \
  -tag "local" \
  -tag "example" \
  -testRunType "Unit Test"

Viewing the results

When the results are uploaded to the Conical instance, they will be mapped as one unit test run to one Conical test run, with the tests subsequently being displayed grouped by name (‘.’ characters are treated as hierarchy separators).
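For example, a hypothetical unit test named MyProject.Orders.OrderTests.CanCreateOrder would appear under MyProject\Orders\OrderTests, with CanCreateOrder as the individual test.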

The details from the trx file (e.g. the machine details, timings etc.) are uploaded as results XML with any logging output being stored as logs.

Future steps

If you have any suggestions as to how to improve the tool / make it easier to handle your use-case etc. then do get in touch, either via the contact form below or via GitHub.

Happy testing.


Evidence Sets Released

We’re pleased to announce that we’ve released a new version of Conical containing support for evidence sets. These allow users to have a high-level view of the state of their entire release candidate across multiple test run sets and products.

It’s taken a little bit more time than we had originally planned to “dot the ‘i’s and cross the ‘t’s”, but it’s definitely worth the wait. We updated the original implementation to remove the ability to mark tests as ‘pass after review’ (PAR) as the feedback we received was that having an immutable overview was rather useful in its own right. The PAR functionality will be coming soon within the general release approval functionality.

To make it easier to create evidence sets from the command line / CI pipeline, we’ve released a tool on NuGet – link – which makes it trivially easy to do so without needing to write any code.

We use this tool ourselves in our CI processes prior to release to create an evidence set representing all of the test material that is run against our final candidate docker image, i.e. we can see the full results of all testing for that package in a single place and be confident that what we’re releasing works.

To get started, simply download Conical and follow the installation instructions. And as always, if you have any requests / comments, please do get in touch with us and we’ll do our very best to help.

Happy testing


Introducing Evidence Sets

[Updated to reflect change in feature scope following user feedback]

We’re pleased to announce a new feature that we’re working on – Evidence Sets. The premise here is that this allows a user to group a set of test run sets together to form a single viewable unit which can be used to provide evidence (hence the name) of testing.

Evidence sets can be used in multiple different ways, including:

  • Allowing for failing tests to be re-run if desired without having to re-run everything.
  • Grouping multiple pieces of testing together to have a single reference for test results and for end user sign-off.

Main features

The main features of evidence sets are:

  • Ability to collate multiple test run sets together, including:
    • optional prefixes to create custom hierarchies
    • subsets of test runs as desired
    • from multiple different products
  • Ability to have multiple test runs contribute to a single test (e.g. to handle re-runs). There are several options (best result, worst result, first result, last result or not allowed) for deciding the state of a test if multiple contributing test runs are specified

Dogfooding

Internally, we use the evidence set functionality to allow us to have a coherent overview of the state of the application prior to release. For each release, we want to run:

  • integration tests for the API layer
    • For a fresh install
    • For each DB upgrade path
  • integration tests for the DB update functionality
  • integration tests for the fresh install functionality

Additionally, we would like to be able to show the results of the UI testing.

Example – API Integration Tests

The API integration tests are designed to check that a given instance of the API performs as expected, covering everything from uploading results to verifying that the security model behaves correctly. Given that we want to ensure the functionality is correct regardless of whether it’s a fresh install or an upgraded one, we run the same set of integration tests against as many combinations as possible. As the running of these tests is highly automated (one just needs to specify the target server and the appropriate admin user to get started), they are trivially easy to run and can generate a large number of result sets to analyse.

By using the evidence sets functionality, we can collate all of these result sets into a single display unit so that it’s very easy to get an overview of the state of the release candidate. We do this by using the ‘prefix’ functionality so it’s very clear where there’d be a problem, e.g.

  • api
    • clean
    • upgrades
      • v1
      • v2
      • v3
      • etc.

And then the usual test hierarchy applies underneath each node.

Note that as we wouldn’t release anything which is non-green, we don’t need to leverage the sign-off functionality in evidence sets.

In addition to the functionality above, we then add the installer / upgrade test results to the same evidence set (under appropriate prefixes) so we can demonstrate to the people signing off the release that everything is good.

Summary

We’re putting the final touches to the functionality and hope to have this work complete in the next week or so, at which point we’ll make it available to all of our clients in the usual fashion.

In the meantime, if you have any questions, queries or suggestions then please do get in touch with us.


New version released

We’re pleased to announce that a new version of Conical has been released with a few minor bug fixes as well as the ability to see more information about the hosting environment.

As usual, to get started, go to our Docker page.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker.

This version contains a few minor fixes as well as a small update to the underlying DB schema.

The schema change will be applied by the tool automatically after the container starts up and the super user code is installed (see your container logs for this code).

To get started, go to our Docker page and follow the instructions.


Uploading from python

One commonly requested feature is being able to upload data from python. Given that all access is via a REST API, this is remarkably easy to do.

Eventually, we would like to add a proper upload / download library for Conical so that not only can people publish their test results from python, but they can also perform programmatic analysis on the data. That is on our book of work, but isn’t currently available.

In the meantime, we’ve put together the following script to allow uploading of data from your projects.

import requests

import enum
from datetime import datetime

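# Raised whenever a call to the Conical REST API fails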
class ConicalException(Exception):
    def __init__(self, message):
        self.message = message

class TestRunStatus(enum.Enum):
    unknown = 1
    exception = 2
    failed = 3
    passed = 4

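# Represents a Conical product; test run sets are created against it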
class Product(object):
    def __init__(self, accessLayer, name, description):
        self.accessLayer = accessLayer
        self.name = name
        self.description = description

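    # Create a new test run set for this product via the upload API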
    def create_testrunset(self, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate, tags = None):
        headers = self.accessLayer._makeHeaders()
        queryParameters = { "product": self.name, "name":testRunSetName, "description":testRunSetDescription, "refDate":testRunSetRefDate.strftime("%Y-%m-%d"), "runDate":testRunSetRunDate.strftime( "%Y-%m-%dT%H:%M:%S"), "tags":tags}
        response = requests.post( f"{self.accessLayer.url}/api/upload/CreateTestRunSet", headers = headers, params = queryParameters)
        if response:
            responseJson = response.json()
            trsID = responseJson["id"]
            trsName = responseJson["name"]
            trsDescription = responseJson["description"]
            trsRefDate = datetime.strptime( responseJson["refDate"], "%Y-%m-%dT%H:%M:%S")
            trsRunDate = datetime.strptime( responseJson["runDate"], "%Y-%m-%dT%H:%M:%S")

            retValue = TestRunSet(self.accessLayer, self.name, trsID, trsName, trsDescription, trsRefDate, trsRunDate)
            return retValue
        else:
            raise ConicalException("An exception occurred")

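# Represents an open test run set; test runs are created against it and it's closed via close() once uploading is complete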
class TestRunSet(object):
    def __init__(self, accessLayer, productName, id, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate):
        self.accessLayer = accessLayer
        self.productName = productName
        self.id = id
        self.name = testRunSetName
        self.description = testRunSetDescription
        self.testRunSetRefDate = testRunSetRefDate
        self.testRunSetRunDate = testRunSetRunDate

    def close(self):
        headers = self.accessLayer._makeHeaders()
        queryParameters = { "status": "standard"}
        response = requests.post( f"{self.accessLayer.url}/api/product/{self.productName}/TestRunSet/{self.id}/updateStatus", params = queryParameters, headers=headers)
        if not response:
            raise ConicalException("Unable to close open TRS")

    def create_testrun(self, testRunName, testRunDescription, testRunType, testRunStatus):
        headers = self.accessLayer._makeHeaders()
        queryParameters = { "product": self.productName, "testRunSetID": self.id, "name":testRunName, "description":testRunDescription, "testRunType":testRunType, "testStatus": testRunStatus.name}
        response = requests.post( f"{self.accessLayer.url}/api/upload/CreateTestRun", headers = headers, params = queryParameters)
        if response:
            responseJson = response.json()
            retValue = TestRun( self.accessLayer, self.productName, self.id, responseJson ["id"], testRunName, testRunDescription)
            return retValue
        else:
            raise ConicalException( "Unable to create test run")

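# Represents a single test run; results payloads (text / xml / json / csv / tsv) can be published against it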
class TestRun(object):
    def __init__(self, accessLayer, productName, trsID, id, name, description):
        self.accessLayer = accessLayer
        self.productName = productName
        self.trsID = trsID
        self.id = id
        self.name = name
        self.description = description

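    # Publish plain-text results for this test run; the other publish_* methods follow the same pattern with a different resultType / style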
    def publish_results_text( self, resultsText):
        headers = self.accessLayer._makeHeaders()
        headers [ "Content-Type"] = "text/plain"
        queryParameters = { "product": self.productName, "testRunSetID":self.trsID, "testRunID":self.id, "resultType": "text"}
        response = requests.post( f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params = queryParameters, data = resultsText)
        if not response:
            raise ConicalException( "Unable to publish results text" )

    def publish_results_xml( self, resultsXml):
        headers = self.accessLayer._makeHeaders()
        headers [ "Content-Type"] = "text/plain"
        queryParameters = { "product": self.productName, "testRunSetID":self.trsID, "testRunID":self.id, "resultType": "xml"}
        response = requests.post( f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params = queryParameters, data = resultsXml)
        if not response:
            raise ConicalException( "Unable to publish results xml" )

    def publish_results_json( self, resultsJson):
        headers = self.accessLayer._makeHeaders()
        headers [ "Content-Type"] = "text/plain"
        queryParameters = { "product": self.productName, "testRunSetID":self.trsID, "testRunID":self.id, "resultType": "json"}
        response = requests.post( f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params = queryParameters, data = resultsJson)
        if not response:
            raise ConicalException( "Unable to publish results json" )

    def publish_results_csv( self, resultsCsv):
        headers = self.accessLayer._makeHeaders()
        headers [ "Content-Type"] = "text/plain"
        queryParameters = { "product": self.productName, "testRunSetID":self.trsID, "testRunID":self.id, "style": "csv"}
        response = requests.post( f"{self.accessLayer.url}/api/upload/publishTestRunXsvResults", headers=headers, params = queryParameters, data = resultsCsv)
        if not response:
            raise ConicalException( "Unable to publish results CSV" )

    def publish_results_tsv( self, resultsTsv):
        headers = self.accessLayer._makeHeaders()
        headers [ "Content-Type"] = "text/plain"
        queryParameters = { "product": self.productName, "testRunSetID":self.trsID, "testRunID":self.id, "style": "tsv"}
        response = requests.post( f"{self.accessLayer.url}/api/upload/publishTestRunXsvResults", headers=headers, params = queryParameters, data = resultsTsv)
        if not response:
            raise ConicalException( "Unable to publish results TSV" )

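# Entry point - wraps the target Conical instance's URL and optional access token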
class ConicalAccessLayer(object):
    def __init__(self, url, accessToken = None):
        self.url = url
        self.accessToken = accessToken

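    # Build the common request headers, adding a bearer token when an access token has been supplied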
    def _makeHeaders(self):
        headers = {}

        if self.accessToken != None:
            headers = { "Authorization": f"Bearer {self.accessToken}"}

        return headers

    def products(self):
        headers = self._makeHeaders()
        response = requests.get( f"{self.url}/api/products", headers = headers)
        
        productsArray = []
        for productJson in response.json():
            productO = Product(self, productJson["name"], productJson ["description"])
            productsArray.append(productO)

        return productsArray

    def get_product(self, productName):
        headers = self._makeHeaders()
        response = requests.get( f"{self.url}/api/product/{productName}", headers=headers)
        if response:
            productJson = response.json()
            p = Product( self, productJson["name"], productJson["description"])
            return p
        else:
            raise ConicalException(f"Unable to fetch '{productName}'")

Using the script is very simple. To do so, you’ll need to create an access token (unless you’ve configured the anonymous user to have write permissions – which we probably don’t recommend) and then:

token = "replace"
accessLayer = ConicalAccessLayer( "https://demo.conical.cloud", token)

dogfoodproduct = accessLayer.get_product( "dogfood-ui")
refDate = datetime(2022, 12, 23)
runDate = datetime(2022, 12, 23, 18, 23, 39)
trs = dogfoodproduct.create_testrunset( "TRS1", "descri", refDate, runDate)
print( f"Created TRS #{trs.id}")
tr = trs.create_testrun( "sample", "sample desc", "temp", TestRunStatus.passed)
print( f"Created TR #{tr.id}")
print( "Uploading test data" )
tr.publish_results_text( "Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool")
tr.publish_results_xml( "<node><subNode1 /><subNode2>bob</subNode2></node>")
tr.publish_results_json( "{ \"bob\": 2.3, \"bib\": null}")
tr.publish_results_csv( "Col #1,Col #2,Col #3,Col #4,Col #5\n234,234,5,7,2\n234,23,41,5,15")
tr.publish_results_tsv( "Col #1\tCol #2\tCol #3\tCol #4\tCol #5\n238\t231\t5\t7\t4")
trs.close()
print( f"Closed trs")

If you have any comments / suggestions on how to improve python support, then please do get in touch with us, either via email, the contact form or in the comments below.

Note that we’re not python experts, so please be gentle with us!

Happy testing.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker for general consumption.

This version contains a few minor fixes and updates.

To get started, go to our Docker page and follow the instructions.


Testing Complex XML

In a recent conversation with a prospect, they mentioned a use-case where they were using XML as their communication mechanism, but the comparison of the relevant documents wasn’t a simple case of checking each node and attribute.

Instead, there was a well-defined way to interpret certain nodes (think of a unit of comparison that is addressed not just by the node name, but by the node name plus several attributes on that node, with the value taken from another attribute):

Node 1.
<fxvega ccyPair="GBPUSD" ccy="USD" expiry="23-08-2022" value="234.6" />

Node 2.
<fxvega ccyPair="GBPUSD" ccy="USD" expiry="23-07-2022" value="157.1" />

Additionally:

  1. There could be multiple different result nodes of the same type per trade (e.g. different expiries)
  2. There could be multiple different result types, e.g. vega, delta and gamma etc.
  3. Ordering of these was unimportant

As a result, the standard XML flattening plugin that we have wasn’t suitable for their use case. Instead, a custom plugin was required to perform this data normalisation, after which the rest of the comparison stack could be used as usual.
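
To illustrate the idea, here’s a minimal Python sketch (using xml.etree; this isn’t the actual plugin, and the KEY_ATTRIBUTES mapping is a hypothetical stand-in for the prospect’s real configuration). The normalisation step boils down to keying each result node on its identifying attributes and then comparing the values:

import xml.etree.ElementTree as ET

# Hypothetical mapping of result node type -> the attributes which identify a unit of comparison
KEY_ATTRIBUTES = {
    "fxvega": ("ccyPair", "ccy", "expiry"),
    "fxdelta": ("ccy",),
}

def normalise(xml_text):
    # Flatten a results document into { (node name, identifying attribute values) : value }
    flattened = {}
    for node in ET.fromstring(xml_text):
        key_attributes = KEY_ATTRIBUTES.get(node.tag, ())
        key = (node.tag,) + tuple(node.get(a) for a in key_attributes)
        flattened[key] = float(node.get("value"))
    return flattened

def compare(expected_xml, actual_xml, tolerance = 0.0):
    # Ordering of the source nodes is irrelevant once everything is keyed
    expected, actual = normalise(expected_xml), normalise(actual_xml)
    additional = set(actual) - set(expected)
    missing = set(expected) - set(actual)
    differences = { key: (expected[key], actual[key])
                    for key in set(expected) & set(actual)
                    if abs(expected[key] - actual[key]) > tolerance }
    return additional, missing, differences

The real plugin performs the equivalent normalisation before handing the flattened values to the standard comparison stack, which then produces the matching / additional / missing / differences breakdown shown at the end of this post.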

Using this hybrid approach, we were able to handle their use-case and create output payloads which are suitable for uploading to Conical and which can be easily consumed by humans.

In their specific example, they wanted to be able to see not just the differences between their items, but also all of the surrounding information so they could import their requests into their existing analysis tools. This meant that the example payload contains quite a bit more information than just the differences.

This could be thought of as making it more complicated to subsequently see just the differences within Conical. However, by taking advantage of Conical’s ability to have XSLT transforms defined on a per-product basis, they’re able to supply different XSLTs which give them custom, interactive views within the tool.

Specifically, they can have multiple XSLTs defined for their product, one of which outputs HTML that renders a table of differences on a per-trade basis, with an embedded-JavaScript button to show the surrounding information for ease of importing into their tools.

The full code for this use case is available on GitHub. The sample output looks something like:

Matching items - 2
Additional items - 1
 Item:
 - id = Vanilla-Put-EURGBP-6M-ATM
Missing items - 0
Differences - 1
 Item:
 - id = Vanilla-Put-EURGBP-1M-ATM
 Diffs:
 - risks.fxdelta-EUR: 1342.2 vs. 1342.3
 - risks.fxvega-EURGBP-2022-06-19-GBP: 234 vs. 234.2