Categories
blog

Introducing Evidence Sets

We’re pleased to announce a new feature that we’re working on – Evidence Sets. An evidence set allows a user to group multiple test run sets together into a single viewable unit, which can be used both to provide evidence (hence the name) of testing and as a way to sign off changes.

Evidence sets can be used in multiple different ways, including:

  • Allowing failing tests to be re-run if desired
  • Grouping multiple pieces of testing together to provide a single reference for test results

Main features

The main features of evidence sets are:

  • Ability to collate multiple test run sets together, including:
    • optional prefixes to create custom hierarchies
    • subsets of test runs as desired
    • from multiple different products
  • Ability to have multiple test runs contribute to a single test (e.g. to handle re-runs). There are several options (best result, worst result, first result, last result or not allowed) for deciding the state of a test when multiple contributing test runs are specified
  • Ability to mark tests as having passed (or indeed failed) after review, with a full role- and test-node-based security model to control who can make these changes
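
To make the aggregation options concrete, here’s a minimal sketch of how they could behave, assuming a simple ranking of test states. The function name, state names and ranking are illustrative assumptions, not the actual Conical API.

```python
# Illustrative sketch only: how an evidence set might resolve the state of a
# test with multiple contributing runs. The ranking below is an assumption
# for illustration purposes.
def resolve_state(results, mode):
    """results: contributing run states in run order; mode: aggregation option."""
    ranking = {"unknown": 0, "exception": 1, "failed": 2, "passed": 3}
    if mode == "not allowed":
        if len(results) > 1:
            raise ValueError("multiple contributing runs are not permitted")
        return results[0]
    if mode == "best result":
        return max(results, key=lambda s: ranking[s])
    if mode == "worst result":
        return min(results, key=lambda s: ranking[s])
    if mode == "first result":
        return results[0]
    if mode == "last result":
        return results[-1]
    raise ValueError(f"unknown mode: {mode}")
```

For example, with a failed run followed by a successful re-run, ‘best result’ and ‘last result’ would report the test as passed, whereas ‘worst result’ and ‘first result’ would report it as failed.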

Dogfooding

Internally, we use the evidence set functionality to allow us to have a coherent overview of the state of the application prior to release. For each release, we want to run:

  • integration tests for the API layer
    • For a fresh install
    • For each DB upgrade path
  • integration tests for the DB update functionality
  • integration tests for the fresh install functionality

Additionally, we would like to be able to show the results of the UI testing.

Example – API Integration Tests

The API integration tests are designed to check that a given instance of the API performs as expected, covering everything from uploading results to checking that the security model works as expected. Given that we want to ensure that the functionality is correct regardless of whether it’s a fresh install or an upgraded install, we want to run the same set of integration tests against as many combinations as possible. As the running of these tests is highly automated (one just needs to specify the target server and an appropriate admin user), they’re trivially easy to run and can generate a large number of result sets to analyse.

By using the evidence sets functionality, we can collate all of these result sets into a single display unit so that it’s very easy to get an overview of the state of the release candidate. We do this using the ‘prefix’ functionality, so that it’s immediately clear where any problem lies, e.g.

  • api
    • clean
    • upgrades
      • v1
      • v2
      • v3
      • etc.

And then the usual test hierarchy applies underneath each node.

Note that as we wouldn’t release anything which is non-green, we don’t need to leverage the sign-off functionality in evidence sets.

In addition to the functionality above, we then add the installer / upgrade test results to the same evidence set (under appropriate prefixes) so we can demonstrate to the people signing off the release that everything is good.

Test review functionality

Although our internal dogfooding usage pattern doesn’t really require ‘pass after review’ functionality (for us, either the functionality does what the test expects of it or it’s a CI/CD process-stopping problem), for a lot of our clients’ use-cases, the ability to sign off on changes (especially numeric changes) is highly valuable.

To this end, when an evidence set is created, the creator can specify the set of roles who can change the state of a test based on the names of said tests. This allows ‘sign-off’ rights to be granted to different categories of users depending on the tests to be signed off.
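
As a sketch of the idea, the mapping from test names to sign-off roles might look something like the following. The pattern syntax, role names and test names here are invented for illustration; the actual configuration model may differ.

```python
import fnmatch

# Hypothetical mapping from test-name patterns to the roles allowed to
# change those tests' states; names and patterns are illustrative only.
SIGNOFF_ROLES = {
    "api.*": {"dev-leads"},
    "pricing.*": {"quants", "dev-leads"},
}

def roles_for_test(test_name):
    """Return the set of roles which may sign off the given test."""
    roles = set()
    for pattern, role_set in SIGNOFF_ROLES.items():
        if fnmatch.fnmatch(test_name, pattern):
            roles |= role_set
    return roles
```

With a mapping like this, a numeric pricing change could be signed off by the quant team while API-level changes remain with the development leads.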

Future steps

We would like to add web-hooks to the functionality so that further CI/CD steps can be triggered by sign-offs. Note that although this functionality does encapsulate a large portion of the proposed in-tool release sign-off functionality, it’s not intended to replace that.

Summary

We’re putting the final touches to the functionality and hope to have the work complete in the next week or so, after which we’ll make it available to all of our clients in the usual fashion.

In the meantime, if you have any questions, queries or suggestions then please do get in touch with us.


New version released

We’re pleased to announce that a new version of Conical has been released with a few minor bug fixes as well as the ability to see more information about the hosting environment.

As usual, to get started go to our docker page.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker.

This version contains a few minor fixes as well as a small update to the underlying DB schema.

The schema change will be applied by the tool automatically after the container starts up and the super user code is installed (see your container logs for this code).

To get started, go to our Docker page and follow the instructions.


Uploading from Python

One commonly requested feature is being able to upload data from Python. Given that all access is via a REST API, this is remarkably easy to do.

Eventually, we would like to add a proper upload / download library for Conical so that not only can people publish their test results from Python, but they can also perform programmatic analysis on the data. That is on our book of work, but isn’t currently available.

In the meantime, we’ve put together the following script to allow uploading of data from your projects.

import requests

import enum
from datetime import datetime

class ConicalException(Exception):
    def __init__(self, message):
        super().__init__(message)
        self.message = message

class TestRunStatus(enum.Enum):
    unknown = 1
    exception = 2
    failed = 3
    passed = 4

class Product(object):
    def __init__(self, accessLayer, name, description):
        self.accessLayer = accessLayer
        self.name = name
        self.description = description

    def create_testrunset(self, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate, tags=None):
        headers = self.accessLayer._makeHeaders()
        queryParameters = {"product": self.name, "name": testRunSetName, "description": testRunSetDescription, "refDate": testRunSetRefDate.strftime("%Y-%m-%d"), "runDate": testRunSetRunDate.strftime("%Y-%m-%dT%H:%M:%S"), "tags": tags}
        response = requests.post(f"{self.accessLayer.url}/api/upload/CreateTestRunSet", headers=headers, params=queryParameters)
        if response:
            responseJson = response.json()
            trsID = responseJson["id"]
            trsName = responseJson["name"]
            trsDescription = responseJson["description"]
            trsRefDate = datetime.strptime(responseJson["refDate"], "%Y-%m-%dT%H:%M:%S")
            trsRunDate = datetime.strptime(responseJson["runDate"], "%Y-%m-%dT%H:%M:%S")

            return TestRunSet(self.accessLayer, self.name, trsID, trsName, trsDescription, trsRefDate, trsRunDate)
        else:
            raise ConicalException("Unable to create test run set")

class TestRunSet(object):
    def __init__(self, accessLayer, productName, id, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate):
        self.accessLayer = accessLayer
        self.productName = productName
        self.id = id
        self.name = testRunSetName
        self.description = testRunSetDescription
        self.testRunSetRefDate = testRunSetRefDate
        self.testRunSetRunDate = testRunSetRunDate

    def close(self):
        headers = self.accessLayer._makeHeaders()
        queryParameters = {"status": "standard"}
        response = requests.post(f"{self.accessLayer.url}/api/product/{self.productName}/TestRunSet/{self.id}/updateStatus", params=queryParameters, headers=headers)
        if not response:
            raise ConicalException("Unable to close open TRS")

    def create_testrun(self, testRunName, testRunDescription, testRunType, testRunStatus):
        headers = self.accessLayer._makeHeaders()
        queryParameters = {"product": self.productName, "testRunSetID": self.id, "name": testRunName, "description": testRunDescription, "testRunType": testRunType, "testStatus": testRunStatus.name}
        response = requests.post(f"{self.accessLayer.url}/api/upload/CreateTestRun", headers=headers, params=queryParameters)
        if response:
            responseJson = response.json()
            return TestRun(self.accessLayer, self.productName, self.id, responseJson["id"], testRunName, testRunDescription)
        else:
            raise ConicalException("Unable to create test run")

class TestRun(object):
    def __init__(self, accessLayer, productName, trsID, id, name, description):
        self.accessLayer = accessLayer
        self.productName = productName
        self.trsID = trsID
        self.id = id
        self.name = name
        self.description = description

    def publish_results_text(self, resultsText):
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "resultType": "text"}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params=queryParameters, data=resultsText)
        if not response:
            raise ConicalException("Unable to publish results text")

    def publish_results_xml(self, resultsXml):
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "resultType": "xml"}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params=queryParameters, data=resultsXml)
        if not response:
            raise ConicalException("Unable to publish results xml")

    def publish_results_json(self, resultsJson):
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "resultType": "json"}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params=queryParameters, data=resultsJson)
        if not response:
            raise ConicalException("Unable to publish results json")

    def publish_results_csv(self, resultsCsv):
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "style": "csv"}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunXsvResults", headers=headers, params=queryParameters, data=resultsCsv)
        if not response:
            raise ConicalException("Unable to publish results CSV")

    def publish_results_tsv(self, resultsTsv):
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "style": "tsv"}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunXsvResults", headers=headers, params=queryParameters, data=resultsTsv)
        if not response:
            raise ConicalException("Unable to publish results TSV")

class ConicalAccessLayer(object):
    def __init__(self, url, accessToken = None):
        self.url = url
        self.accessToken = accessToken

    def _makeHeaders(self):
        headers = {}

        if self.accessToken is not None:
            headers = {"Authorization": f"Bearer {self.accessToken}"}

        return headers

    def products(self):
        headers = self._makeHeaders()
        response = requests.get(f"{self.url}/api/products", headers=headers)
        if not response:
            raise ConicalException("Unable to fetch products")

        productsArray = []
        for productJson in response.json():
            productsArray.append(Product(self, productJson["name"], productJson["description"]))

        return productsArray

    def get_product(self, productName):
        headers = self._makeHeaders()
        response = requests.get(f"{self.url}/api/product/{productName}", headers=headers)
        if response:
            productJson = response.json()
            return Product(self, productJson["name"], productJson["description"])
        else:
            raise ConicalException(f"Unable to fetch '{productName}'")

Using the script is very simple. To do so, you’ll need to create an access token (unless you’ve configured the anonymous user to have write permissions – which we probably don’t recommend) and then:

token = "replace"
accessLayer = ConicalAccessLayer("https://demo.conical.cloud", token)

dogfoodproduct = accessLayer.get_product("dogfood-ui")
refDate = datetime(2022, 12, 23)
runDate = datetime(2022, 12, 23, 18, 23, 39)
trs = dogfoodproduct.create_testrunset("TRS1", "descri", refDate, runDate)
print(f"Created TRS #{trs.id}")
tr = trs.create_testrun("sample", "sample desc", "temp", TestRunStatus.passed)
print(f"Created TR #{tr.id}")
print("Uploading test data")
tr.publish_results_text("Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool")
tr.publish_results_xml("<node><subNode1 /><subNode2>bob</subNode2></node>")
tr.publish_results_json("{ \"bob\": 2.3, \"bib\": null}")
tr.publish_results_csv("Col #1,Col #2,Col #3,Col #4,Col #5\n234,234,5,7,2\n234,23,41,5,15")
tr.publish_results_tsv("Col #1\tCol #2\tCol #3\tCol #4\tCol #5\n238\t231\t5\t7\t4")
trs.close()
print("Closed TRS")

If you have any comments / suggestions on how to improve Python support, then please do get in touch with us, either via email, the contact form or in the comments below.

Note that we’re not Python experts, so please be gentle with us!

Happy testing.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker for general consumption.

This version contains a few minor fixes and updates.

To get started, go to our Docker page and follow the instructions.


Testing Complex XML

In a recent conversation with a prospect, they mentioned a use-case where they were using XML as their communication mechanism, but comparing the relevant documents wasn’t a simple case of checking each node and attribute.

Instead, there was a well-defined way to interpret certain nodes: a unit of comparison was addressed not just by the node itself, but by the node plus several of its attributes, with the value taken from another attribute:

Node 1.
<fxvega ccyPair="GBPUSD" ccy="USD" expiry="23-08-2022" value="234.6" />

Node 2.
<fxvega ccyPair="GBPUSD" ccy="USD" expiry="23-07-2022" value="157.1" />

Additionally:

  1. There could be multiple different result nodes of the same type per trade (e.g. different expiries)
  2. There could be multiple different result types, e.g. vega, delta and gamma etc.
  3. Ordering of these was unimportant

Consequently, our standard XML flattening plugin wasn’t suitable for their use-case. Instead, a custom plugin was required to perform this data normalisation, after which the rest of the comparison stack could be used as usual.
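
To illustrate the kind of normalisation involved, one could flatten each result node into a key built from the node name and its identifying attributes. This is a simplified sketch, not the client’s actual plugin; the attribute names are taken from the example nodes above.

```python
import xml.etree.ElementTree as ET

# Simplified sketch of the normalisation described above; not the actual
# plugin. The comparison key is the node name plus its identifying
# attributes, and the value comes from the 'value' attribute.
def normalise(xml_text):
    root = ET.fromstring(xml_text)
    flattened = {}
    for node in root:
        key_parts = [node.tag] + [node.get(a) for a in ("ccyPair", "ccy", "expiry") if node.get(a) is not None]
        flattened["-".join(key_parts)] = float(node.get("value"))
    return flattened
```

Because the result is keyed rather than positional, the ordering requirement (point 3) falls away naturally, and multiple result types and expiries simply become distinct keys.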

Using this hybrid approach, we were able to handle their use-case, creating output payloads which are suitable for uploading to Conical and which can easily be consumed by humans.

In their specific example, they wanted to be able to see not just the differences between their items, but also all of the surrounding information so they could import their requests into their existing analysis tools. This meant that the example payload contains quite a bit more information than just the differences.

This could be thought of as making it more complicated to subsequently see just the differences within Conical. However, by taking advantage of Conical’s ability to have XSLT transforms defined on a per-product basis, they’re able to supply different XSLTs giving them custom, interactive views within the tool.

Specifically, they can have multiple XSLTs defined for their product, one of which, outputting HTML, renders a table of differences on a per-trade basis with the ability, using embedded JavaScript, to show the surrounding information via a button for ease of importing into their tools.

The full code for this use case is available on GitHub. The sample output looks something like:

Matching items - 2
Additional items - 1
 Item:
 - id = Vanilla-Put-EURGBP-6M-ATM
Missing items - 0
Differences - 1
 Item:
 - id = Vanilla-Put-EURGBP-1M-ATM
 Diffs:
 - risks.fxdelta-EUR: 1342.2 vs. 1342.3
 - risks.fxvega-EURGBP-2022-06-19-GBP: 234 vs. 234.2

Testing System.Data.*

We’re pleased to announce that we’ve extended the object flattener framework (which feeds into the comparison framework) to handle:

  • System.Data.DataTable
  • System.Data.DataView
  • System.Data.DataSet

This functionality is available in the BorsukSoftware.ObjectFlattener.SystemData Nuget package and is completely free for all use-cases.

Using this new library, it’s possible to form a flattened representation of the above structures, including nested ones, so that they can be easily compared.
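
The library itself is .NET, but the underlying idea of flattening can be sketched in a few lines of Python: nested structures are reduced to path → value pairs which can then be compared key by key. This is a conceptual illustration only, not the library’s actual output format.

```python
# Conceptual illustration of flattening nested data into path -> value
# pairs for comparison; not the library's actual output format.
def flatten(obj, prefix=""):
    items = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            items.update(flatten(value, f"{prefix}{key}."))
    elif isinstance(obj, list):
        for index, value in enumerate(obj):
            items.update(flatten(value, f"{prefix}[{index}]."))
    else:
        items[prefix.rstrip(".")] = obj
    return items
```

Once two structures have been flattened this way, a comparison is simply a matter of matching keys and comparing the leaf values.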

As usual, if you have any questions, queries or suggestions then please do get in touch and we’ll see what we can do for you.

Happy testing!


Testing arbitrary Json arrays

We recently had a request from a client as to how they could improve their testing of a web service. Prior to each release, they would run a series of tests against the candidate service which were structured as unit tests for ease of running.

The web service would return a Json document representing a simple unordered array of complex objects (each with their own schema) and the expected behaviour was to check that the values hadn’t changed. The only commonality in schema was a key property which should be used for choosing what to compare against what.

Initially, they were doing this via checking the raw text which was returned. When they were different, they had some interesting times understanding where the differences were.

Within the testing framework, we have a method for comparing sets of objects, and within the object flattening framework, we have support for processing Json objects (note that they were already using Json.net, so the example uses the same for consistency with their code base).

These can be combined together to perform their comparisons and to get full visibility of all of the differences between the objects.

The interesting part of the code is the following.

// expected = IEnumerable of expected values
// actual = IEnumerable of actual values
var results = setComparer.CompareObjectSets<Newtonsoft.Json.Linq.JToken>((idx, jobject) =>
{
    // We define the keys for comparison by extracting the property on the underlying Json object called 'key'
    var keys = new Dictionary<string, object>();

    if (jobject is Newtonsoft.Json.Linq.JObject jo)
        keys["key"] = jo.Property("key").Value;
    return keys;
},
expected,
actual);

The full code can be seen on GitHub here.

Note that in this specific example, the returned Json was a simple array of the objects and therefore the deserialization could be done as a JArray. If the objects to be compared were nested in the schema, then a custom object (using object[] for the objects to be compared) could be used for the deserialization.
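
For readers working in Python rather than .NET, the same keyed set comparison can be sketched as follows. This is illustrative only; the post’s example uses our .NET testing framework, and the function and field names here are assumptions.

```python
# Illustrative Python analogue of the keyed set comparison: objects are
# matched by their 'key' property, then matched pairs are checked for equality.
def compare_object_sets(expected, actual, key="key"):
    expected_by_key = {obj[key]: obj for obj in expected}
    actual_by_key = {obj[key]: obj for obj in actual}
    return {
        "missing": sorted(expected_by_key.keys() - actual_by_key.keys()),
        "additional": sorted(actual_by_key.keys() - expected_by_key.keys()),
        "differences": sorted(k for k in expected_by_key.keys() & actual_by_key.keys()
                              if expected_by_key[k] != actual_by_key[k]),
    }
```

As with the .NET version, the output immediately shows which objects are missing, which are additional and which have changed, rather than leaving the user to diff raw text.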

This simple snippet allowed them to simplify their investigations in the case of any differences, thus saving them time on each release.


Version 1 released

We’re pleased to announce that version 1 of Conical has now been released for general consumption. For instructions on how to get started with the tool, please click here.

Version 1 contains all of the features necessary to be able to use the tool to improve your release processes. We are continuing to work hard to develop the next version with additional features for release management and we hope to release this shortly.

In the meantime, if you have any suggestions or requests on features then please do get in touch with us via email or see the product roadmap page to see what features are currently in the works.

Happy testing!