Conical uses several pieces of terminology. This page aims to explain those terms and avoid confusion.
If there are any terms that we’ve not covered, then please let us know and we’ll update accordingly.
Component
Results functionality is grouped into components, e.g. ResultsXml, ResultsJson and AdditionalFiles. Components represent the smallest unit of results within a test run.
Evidence Set
An evidence set is a logical, immutable collection of test run sets (TRSs). An evidence set can be created from TRSs across multiple products and is intended to provide an overview of the testing state of your system. For example, if you capture data for both integration and regression tests, a single evidence set can provide an overview of both types of testing.
An evidence set is represented by a series of tests. These tests map to test runs from the source TRSs. Where multiple test runs contribute to a single evidence set test, the final result is determined according to the flag specified when the evidence set is created. The options are:
- Don’t allow
- Use best result
- Use worst result
- Use first result
- Use last result
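The combination flags above can be sketched as a small function. Note that the status names, their ordering, and the behaviour of "Don't allow" are illustrative assumptions for this sketch, not Conical's actual API or schema.

```python
from enum import IntEnum

# Hypothetical status ordering, from best to worst; illustrative only.
class Status(IntEnum):
    PASSED = 0
    WARNING = 1
    FAILED = 2

def combine(results, mode):
    """Combine the statuses of all test runs contributing to one
    evidence set test, according to the configured flag."""
    if mode == "dont_allow":
        # Assumed behaviour: multiple contributing runs are an error.
        if len(results) > 1:
            raise ValueError("Multiple test runs map to the same evidence set test")
        return results[0]
    if mode == "best":
        return min(results)   # lowest severity wins
    if mode == "worst":
        return max(results)   # highest severity wins
    if mode == "first":
        return results[0]
    if mode == "last":
        return results[-1]
    raise ValueError(f"Unknown combination mode: {mode}")

print(combine([Status.PASSED, Status.FAILED], "worst").name)  # FAILED
```

For instance, with "Use worst result", one failing contributing run is enough to mark the evidence set test as failed, whereas "Use best result" would report it as passed.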
Product
A product is the basic unit of configuration within Conical. There is no limit on the number of products which can be created in an instance, so we would recommend creating one product per use case.
Tags
Test run sets and evidence sets can be defined with an optional, arbitrary set of tags. These are purely metadata. We typically expect them to be used to simplify automation by providing an additional identifying property for a given test run set or evidence set.
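The tag-based identification described above amounts to simple subset matching. The sketch below shows the idea with an in-memory list; the names and structure are assumptions for illustration, not how Conical stores or queries tags.

```python
# Hypothetical in-memory representation of tagged test run sets.
test_run_sets = [
    {"name": "nightly-regression", "tags": {"regression", "release-1.2"}},
    {"name": "pr-integration", "tags": {"integration", "pr"}},
]

def with_tags(sets, required):
    """Return the test run sets carrying all of the requested tags."""
    return [s for s in sets if required <= s["tags"]]

matches = with_tags(test_run_sets, {"regression"})
print([s["name"] for s in matches])  # ['nightly-regression']
```

An automation job can then locate "its" test run set by tag rather than by name, which stays stable even as names change between runs.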
Test Run Set
A test run set represents a single execution of a set of test runs. This is the typical unit that end users start their interactions with before drilling down into individual test runs.
Note that test run sets may be tagged with optional, arbitrary tags to assist in searching and in identifying the purpose behind a test run set.
Test Run Type
A test run type is a configuration unit representing information about a type of test (each test run has a type). It can be used to define which result types should be displayed for a given test, and there is no limit to the number of test run types. For example, if you have two logical types of tests, one outputting its results as XML and one as JSON, we would recommend creating two test run types, one configured to show XML and one configured to show JSON, so that viewers know precisely what to drill down into.
Test Run
A test run is the result of running a single test. Test runs are collated into test run sets (see above) and are the smallest unit of work. Each test run contains one or more result components and has a status associated with it.
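The relationships between the terms above (test run set → test run → result component) can be summarised with a small data model. This is an illustrative sketch only; the class and field names are assumptions, not Conical's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ResultComponent:
    kind: str          # e.g. "ResultsXml", "ResultsJson", "AdditionalFiles"
    payload: bytes = b""

@dataclass
class TestRun:
    name: str
    status: str        # e.g. "passed" / "failed"; hypothetical values
    components: list = field(default_factory=list)  # one or more components

@dataclass
class TestRunSet:
    name: str
    tags: set = field(default_factory=set)
    test_runs: list = field(default_factory=list)

# A test run set holding one test run with a single XML result component.
run = TestRunSet(
    name="nightly",
    tags={"regression"},
    test_runs=[TestRun("curve-check", "passed", [ResultComponent("ResultsXml")])],
)
```

Evidence sets then sit one level above this: they reference test runs drawn from one or more such test run sets.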