
Introducing Evidence Sets

[Updated to reflect change in feature scope following user feedback]

We’re pleased to announce a new feature that we’re working on – Evidence Sets. The idea is simple: an evidence set groups a number of test run sets together into a single viewable unit which can be used to provide evidence (hence the name) of testing.

Evidence sets can be used in a number of ways, including:

  • Allowing failing tests to be re-run, if desired, without having to re-run everything.
  • Grouping multiple pieces of testing together to provide a single reference for test results and end user sign-off.

Main features

The main features of evidence sets are:

  • Ability to collate multiple test run sets together, including:
    • optional prefixes to create custom hierarchies
    • subsets of test runs as desired
    • from multiple different products
  • Ability to have multiple test runs contribute to a single test (e.g. to handle re-runs). There are several options – best result, worst result, first result, last result or not allowed – for deciding the state of a test when multiple contributing test runs are specified (see the sketch below).
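
To make those options concrete, here’s a minimal sketch of how a single state might be resolved from multiple contributing runs. This is purely illustrative – it isn’t Conical’s actual implementation, and the status ordering (passed best, unknown worst) is our assumption:

from enum import IntEnum

class Status(IntEnum):
    # assumed ordering for illustration: higher is "better"
    unknown = 1
    exception = 2
    failed = 3
    passed = 4

def resolve(statuses, policy):
    """Resolve a list of per-run statuses into a single test state."""
    if policy == "not allowed" and len(statuses) > 1:
        raise ValueError("multiple contributing runs are not permitted")
    if policy == "best result":
        return max(statuses)
    if policy == "worst result":
        return min(statuses)
    if policy == "last result":
        return statuses[-1]
    # "first result" (and the single-run case)
    return statuses[0]

# e.g. a failed run followed by a successful re-run
print(resolve([Status.failed, Status.passed], "best result").name)   # passed
print(resolve([Status.failed, Status.passed], "first result").name)  # failed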

Dogfooding

Internally, we use the evidence set functionality to get a coherent overview of the state of the application prior to release. For each release, we want to run:

  • integration tests for the API layer
    • for a fresh install
    • for each DB upgrade path
  • integration tests for the DB update functionality
  • integration tests for the fresh install functionality

Additionally, we would like to be able to show the results of the UI testing.

Example – API Integration Tests

The API integration tests are designed to check that a given instance of the API performs as expected, covering everything from uploading results to the security model. Since we want to ensure that the functionality is correct regardless of whether it’s a fresh install or an upgraded one, we run the same set of integration tests against as many combinations as possible. As the running of these tests is highly automated (one just needs to specify the target server and an appropriate admin user), they are trivial to run and can generate a large number of result sets to analyse.

By using the evidence sets functionality, we can collate all of these result sets into a single display unit so that it’s very easy to get an overview of the state of the release candidate. We do this using the ‘prefix’ functionality so that it’s immediately clear where any problem lies, e.g.

  • api
    • clean
    • upgrades
      • v1
      • v2
      • v3
      • etc.

And then the usual test hierarchy applies underneath each node.
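
As a rough illustration of what this grouping gives us, here’s a small sketch – the result set names, the ‘/’ separator and the statuses are all invented for the example – showing how prefixed entries naturally group into a tree like the one above:

from collections import defaultdict

# invented example data: prefixed result set names and their states
result_sets = {
    "api/clean": "passed",
    "api/upgrades/v1": "passed",
    "api/upgrades/v2": "failed",
    "api/upgrades/v3": "passed",
}

# group entries by their top-level prefix so a problem area stands out
byPrefix = defaultdict(list)
for name, state in result_sets.items():
    byPrefix[name.split("/", 1)[0]].append((name, state))

for prefix, entries in sorted(byPrefix.items()):
    print(prefix)
    for name, state in sorted(entries):
        print(f"  {name}: {state}")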

Note that as we wouldn’t release anything which is non-green, we don’t need to leverage the sign-off functionality in evidence sets.

In addition to the functionality above, we then add the installer / upgrade test results to the same evidence set (under appropriate prefixes) so we can demonstrate to the people signing off the release that everything is good.

Summary

We’re putting the final touches to the functionality and hope to have the work complete in the next week or so, after which we’ll make it available to all of our clients in the usual fashion.

In the meantime, if you have any questions, queries or suggestions then please do get in touch with us.


New version released

We’re pleased to announce that a new version of Conical has been released with a few minor bug fixes as well as the ability to see more information about the hosting environment.

As usual, to get started, go to our Docker page.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker.

This version contains a few minor fixes as well as a small update to the underlying DB schema.

The schema change will be applied by the tool automatically after the container starts up and the super user code is installed (see your container logs for this code).

To get started, go to our Docker page and follow the instructions.


Uploading from Python

One commonly requested feature is the ability to upload data from Python. Given that all access is via a REST API, this is remarkably easy to do.

Eventually, we would like to add a proper upload / download library for Conical so that not only can people publish their test results from Python, but they can also perform programmatic analysis on the data. That is on our book of work but isn’t currently available.

In the meantime, we’ve put together the following script to allow uploading of data from your projects.

import enum
from datetime import datetime

import requests

class ConicalException(Exception):
    """Raised when a call to the Conical REST API fails."""

    def __init__(self, message):
        super().__init__(message)
        self.message = message

class TestRunStatus(enum.Enum):
    unknown = 1
    exception = 2
    failed = 3
    passed = 4

class Product(object):
    """A single product within a Conical instance."""

    def __init__(self, accessLayer, name, description):
        self.accessLayer = accessLayer
        self.name = name
        self.description = description

    def create_testrunset(self, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate, tags=None):
        """Create a new test run set (TRS) within this product."""
        headers = self.accessLayer._makeHeaders()
        queryParameters = {
            "product": self.name,
            "name": testRunSetName,
            "description": testRunSetDescription,
            "refDate": testRunSetRefDate.strftime("%Y-%m-%d"),
            "runDate": testRunSetRunDate.strftime("%Y-%m-%dT%H:%M:%S"),
            "tags": tags,
        }
        response = requests.post(f"{self.accessLayer.url}/api/upload/CreateTestRunSet", headers=headers, params=queryParameters)
        if not response:
            raise ConicalException("Unable to create test run set")

        responseJson = response.json()
        return TestRunSet(
            self.accessLayer,
            self.name,
            responseJson["id"],
            responseJson["name"],
            responseJson["description"],
            datetime.strptime(responseJson["refDate"], "%Y-%m-%dT%H:%M:%S"),
            datetime.strptime(responseJson["runDate"], "%Y-%m-%dT%H:%M:%S"))

class TestRunSet(object):
    """A test run set (TRS) to which individual test runs can be added."""

    def __init__(self, accessLayer, productName, id, testRunSetName, testRunSetDescription, testRunSetRefDate, testRunSetRunDate):
        self.accessLayer = accessLayer
        self.productName = productName
        self.id = id
        self.name = testRunSetName
        self.description = testRunSetDescription
        self.testRunSetRefDate = testRunSetRefDate
        self.testRunSetRunDate = testRunSetRunDate

    def close(self):
        """Mark the TRS as complete ('standard' status)."""
        headers = self.accessLayer._makeHeaders()
        queryParameters = {"status": "standard"}
        response = requests.post(f"{self.accessLayer.url}/api/product/{self.productName}/TestRunSet/{self.id}/updateStatus", params=queryParameters, headers=headers)
        if not response:
            raise ConicalException("Unable to close open TRS")

    def create_testrun(self, testRunName, testRunDescription, testRunType, testRunStatus):
        """Create a new test run within this TRS."""
        headers = self.accessLayer._makeHeaders()
        queryParameters = {
            "product": self.productName,
            "testRunSetID": self.id,
            "name": testRunName,
            "description": testRunDescription,
            "testRunType": testRunType,
            "testStatus": testRunStatus.name,
        }
        response = requests.post(f"{self.accessLayer.url}/api/upload/CreateTestRun", headers=headers, params=queryParameters)
        if not response:
            raise ConicalException("Unable to create test run")

        responseJson = response.json()
        return TestRun(self.accessLayer, self.productName, self.id, responseJson["id"], testRunName, testRunDescription)

class TestRun(object):
    """A single test run to which results can be published."""

    def __init__(self, accessLayer, productName, trsID, id, name, description):
        self.accessLayer = accessLayer
        self.productName = productName
        self.trsID = trsID
        self.id = id
        self.name = name
        self.description = description

    def _publish(self, resultType, payload):
        """Publish a results payload; the resultType parameter tells Conical how to interpret it."""
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "resultType": resultType}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunResults", headers=headers, params=queryParameters, data=payload)
        if not response:
            raise ConicalException(f"Unable to publish results {resultType}")

    def _publish_xsv(self, style, payload):
        """Publish separated-value results (CSV / TSV) via the dedicated endpoint."""
        headers = self.accessLayer._makeHeaders()
        headers["Content-Type"] = "text/plain"
        queryParameters = {"product": self.productName, "testRunSetID": self.trsID, "testRunID": self.id, "style": style}
        response = requests.post(f"{self.accessLayer.url}/api/upload/publishTestRunXsvResults", headers=headers, params=queryParameters, data=payload)
        if not response:
            raise ConicalException(f"Unable to publish results {style}")

    def publish_results_text(self, resultsText):
        self._publish("text", resultsText)

    def publish_results_xml(self, resultsXml):
        self._publish("xml", resultsXml)

    def publish_results_json(self, resultsJson):
        self._publish("json", resultsJson)

    def publish_results_csv(self, resultsCsv):
        self._publish_xsv("csv", resultsCsv)

    def publish_results_tsv(self, resultsTsv):
        self._publish_xsv("tsv", resultsTsv)

class ConicalAccessLayer(object):
    """Entry point for talking to a Conical instance via its REST API."""

    def __init__(self, url, accessToken=None):
        self.url = url
        self.accessToken = accessToken

    def _makeHeaders(self):
        """Build the request headers, including the bearer token if one was supplied."""
        headers = {}
        if self.accessToken is not None:
            headers["Authorization"] = f"Bearer {self.accessToken}"
        return headers

    def products(self):
        """Fetch all products visible to the current user."""
        headers = self._makeHeaders()
        response = requests.get(f"{self.url}/api/products", headers=headers)
        if not response:
            raise ConicalException("Unable to fetch products")

        return [Product(self, productJson["name"], productJson["description"]) for productJson in response.json()]

    def get_product(self, productName):
        """Fetch a single product by name."""
        headers = self._makeHeaders()
        response = requests.get(f"{self.url}/api/product/{productName}", headers=headers)
        if not response:
            raise ConicalException(f"Unable to fetch '{productName}'")

        productJson = response.json()
        return Product(self, productJson["name"], productJson["description"])
Using the script is very simple. To do so, you’ll need to create an access token (unless you’ve configured the anonymous user to have write permissions – which we don’t recommend) and then:

token = "replace"  # replace this with your access token
accessLayer = ConicalAccessLayer("https://demo.conical.cloud", token)

dogfoodProduct = accessLayer.get_product("dogfood-ui")
refDate = datetime(2022, 12, 23)
runDate = datetime(2022, 12, 23, 18, 23, 39)
trs = dogfoodProduct.create_testrunset("TRS1", "descri", refDate, runDate)
print(f"Created TRS #{trs.id}")

tr = trs.create_testrun("sample", "sample desc", "temp", TestRunStatus.passed)
print(f"Created TR #{tr.id}")

print("Uploading test data")
tr.publish_results_text("Booooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooool")
tr.publish_results_xml("<node><subNode1 /><subNode2>bob</subNode2></node>")
tr.publish_results_json("{ \"bob\": 2.3, \"bib\": null}")
tr.publish_results_csv("Col #1,Col #2,Col #3,Col #4,Col #5\n234,234,5,7,2\n234,23,41,5,15")
tr.publish_results_tsv("Col #1\tCol #2\tCol #3\tCol #4\tCol #5\n238\t231\t5\t7\t4")

trs.close()
print("Closed TRS")

If you have any comments or suggestions on how to improve Python support, then please do get in touch with us, either via email, the contact form or the comments below.

Note that we’re not Python experts, so please be gentle with us!

Happy testing.


New version released

We’re pleased to announce that we’ve uploaded a new version of Conical to Docker for general consumption.

This version contains a few minor fixes and updates.

To get started, go to our Docker page and follow the instructions.


Testing System.Data.*

We’re pleased to announce that we’ve extended the object flattener framework (which feeds into the comparison framework) to handle:

  • System.Data.DataTable
  • System.Data.DataView
  • System.Data.DataSet

This functionality is available in the BorsukSoftware.ObjectFlattener.SystemData NuGet package and is completely free for all use-cases.

Using this new library, it’s possible to form a flattened representation of the above structures, including nested ones, so that they can be easily compared.
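
To give a feel for what ‘flattening’ means here, the following is a conceptual sketch in Python (the library itself is a .NET package – this is not its API, and the path format is invented for the example) of turning a table’s rows into path / value pairs which can then be compared key by key:

# invented example data standing in for a DataTable's rows
rows = [
    {"id": 1, "price": 2.5},
    {"id": 2, "price": 3.75},
]

# flatten each (row, column) cell into a single path/value pair
flattened = {
    f"[{rowIndex}].{column}": value
    for rowIndex, row in enumerate(rows)
    for column, value in row.items()
}

print(flattened)
# {'[0].id': 1, '[0].price': 2.5, '[1].id': 2, '[1].price': 3.75}

Two such flattened dictionaries can then be diffed key by key, which is what makes nested structures straightforward to compare.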

As usual, if you have any questions, queries or suggestions then please do get in touch and we’ll see what we can do for you.

Happy testing!


Version 1 released

We’re pleased to announce that version 1 of Conical has now been released for general consumption. For instructions on how to get started with the tool, please click here.

Version 1 contains all of the features necessary to use the tool to improve your release processes. We’re continuing to work hard on the next version, with additional features for release management, and we hope to release it shortly.

In the meantime, if you have any suggestions or feature requests then please do get in touch with us via email, or check the product roadmap page to see what features are currently in the works.

Happy testing!