FAQs

Here are answers to a series of commonly asked questions. If you have a question which isn’t answered here, then please get in touch with us.

General

What is Conical?

Conical is a suite of tools to make it easier to understand the impact of proposed code, process or config changes to systems and to subsequently sign them off ready for release. It is mainly targeted at higher level components / systems where a wide range of inputs can have an impact on the overall output of the system and these impacts need to be analysed prior to release.

The main visible component is a website (demo) which allows you to view the impact of proposed changes. We also provide a range of tools to generate the differences for display and analysis.

In addition to allowing users to view / analyse any differences caused by a change, another major use-case of Conical is as a centralised repository / audit trail of test results. This allows end users to see that their use-cases have been tested, giving them greater confidence in the impact of any change and making them more willing to sign off the change and get it into production sooner.

Explicitly, it is not a replacement for your CI system, nor for having suitable unit tests.

How do I get started?
We have full instructions here.
Why should I use the tool when I can use unit tests?

Conical and unit tests aren’t looking to solve the same problem. Unit tests are fantastic when the total range of inputs is known and there’s a well defined, previously known answer for what is correct. We use unit tests heavily within Conical to ensure code quality and strongly recommend having a suitable suite of unit tests for your own projects.

However, from a high-level system perspective, unit tests by themselves do not solve all of the testing requirements. There will also be the need for integration tests to test interactions between the components as well as regression tests to handle the validation of the system against larger volumes of data; e.g. for a risk management system, one would typically wish to perform regression testing on whole, real world portfolios. What testing is required is a function of the product being tested, e.g. the testing requirements for Conical are very different to those of a risk management system.

For a lot of changes to the system, whether they are code, config or process changes, there will be both an expected set of impacts as well as portions of the system where no impact is expected. Conical assists in this process by making it very simple to see the impact of the change as well as confirming the lack of impact in other areas. With this visibility, it’s easier for both developers and end users to sign off on the proposed changes.

I’m running my integration / regression tests as unit tests, how can Conical help?

Where your organisation already has integration / regression tests but they’re structured as unit tests, we can help by providing a central location to view the results of these across your whole product. This can be done by using the trx uploader (see this blog post for details) for each of your test sets and then the evidence sets functionality to provide a high level overview. All of this can be done directly from the command line without the need for any further coding.

Once that initial process improvement has been made, you can then make a decision, appropriate for your organisation, as to whether or not it’s worth updating your testing tools to a more functional infrastructure (e.g. one which can provide more diagnostics when there’s a failure, report differences etc.). Note that there’s nothing which requires this to be an ‘all or nothing’ decision. By using evidence sets, it’s possible to have a testing overview combining testing evidence from multiple sources / processes.

Why should I use the tool when I’m already using CI?

Conical is not intended to replace your existing CI system and never will be. The typical use-case is that the CI process is extended to run more integration / regression tests and publish the results to Conical for analysis, so that in the case of differences, decision makers can decide whether or not the impact is correct.

For more information on how Conical has been integrated with CI systems, click here; for case studies, click here.

Why do you use the term differences rather than test failures?

Conical is designed to help people understand the impact of proposed changes to their system rather than running explicit pass or fail tests where any change is treated as a failure. If your use-case is the latter, then unit tests may well be a better option for you.

To that end, the aim of the tool is to make it as easy as possible for people to see the impact of the change so they can make a conscious decision as to whether or not the impact is acceptable / intended before releasing their change.

My application is in Excel, can I use the tool?

Yes. We have an Excel add-in which can be used to upload data to the Conical server.

Typically, when we have an Excel sheet to be tested, we construct an additional automation sheet using VBA which can repeatedly:

  1. Load up the sheet to be tested
  2. Set the appropriate input range – e.g. portfolio / date selection
  3. Calculate the sheet
  4. Extract out the results range to the main automation sheet
  5. Source the set of expected results for comparison (this can be from reading in a file, sourcing from a DB, running the official version of the sheet etc.)
  6. Perform the comparison in the automation sheet (standard Excel functionality) and store the results in a named range
  7. Decide based off the comparison whether or not that test should be treated as a pass or a failure

Once all of the combinations have been run, the Excel add-in can be used to upload all of this data to Conical and the differences can be analysed / used for audit trail purposes.

We will be uploading a sample of this use-case shortly. If this is your use-case and you need assistance prior to that, please contact us – services@conical.cloud.

My application is python based, can I use the tool?

Yes.

We’ve put together a small access component for uploading data to the tool. This can be found on this blog post.

If you have additional requirements for ancillary data then please contact us so we can add official support for your data types.

My application is Java etc. based, can I use the tool?

Yes.

All communication with the tool is via a REST API with the API description being publicly available (click here for the demo instance’s swagger endpoint) so any language can be used to upload / download data.

Currently we have support for ancillary test data such as .net assemblies and memory usage. If your requirements are different, then contact us and we can add your additional requirements to the list of supported components.

We do manual testing, why would we need Conical?

The general aim of Conical is to speed up / improve release processes and there are multiple ways that Conical can help manual test processes.

Initially, the tool can be used as a results store / audit trail of results to provide test evidence for your users. Subsequently the testing processes can be improved to add more automation thereby reducing the [human] time taken to perform the testing.

We’ve used this approach for confirming that the manual testing has been done for certain, non-Conical related, projects where automated testing hasn’t been feasible or hasn’t been implemented yet. We do this by maintaining a spreadsheet of the manual tests which are to be performed and then uploading directly from the sheet using our Excel add-in.

Clearly, whether or not it’s worthwhile for your use-case depends on your use-case / release requirements.

We can’t afford to automate all of our testing, how does Conical help?

At Conical, we realise that not every organisation can afford to invest large amounts of time into having fully automated testing for their products. Sometimes this’ll be because they don’t have enough resources to do so, or simply because it’s not of sufficient importance to the organisation that a specific item is fully tested.

To that end, we usually end up recommending a hybrid approach whereby the product is split into logical portions:

  • Those with easily automated testing
  • Those tested manually
  • Those untested (we’ll ignore these here)

For the automated portion, results can be uploaded to Conical through the usual mechanisms.

For the manual portion, we’d typically recommend having a ‘testing checklist’ which can be broken down into a series of, at least conceptual, tests. The results of these can then be tracked (we use Excel) for a given release and subsequently uploaded to Conical (literally as a series of test runs with a test status and optionally an entry in logs or results text). We do the upload using a customised Excel add-in which we’re hoping to release shortly.

Note that all of these uploaded test run sets would be tagged with one or more identifying tags.

Once the results have been uploaded to the tool for a given candidate release, then an evidence set would be created (either through the UI or programmatically) containing all of these results which can be used to allow release decisions to be made.

We want to be able to access our test results whilst on the move, is there an app?

There is no mobile app planned. However, the website itself should be fully mobile compatible / friendly / responsive.

If you find that there are issues, then please get in touch with us and we’ll rectify accordingly.

We’re not very technical but would like to improve our testing, can you help?

Yes. In addition to the software, we also provide a range of custom consultancy services to help your business. To enquire about these services, please contact us – services@conical.cloud.

What features are coming next?

For a description of our road map, please see here.

Note that if you have any requests for features etc. then please do let us know – suggestions@conical.cloud

Data security

Our data is sensitive, we don’t want it to leave our network

It won’t.

Conical is self-hosted so you host the tool on your network and your data only travels between your hosting Docker container and the SQL server that you specify. Other than that, it is not transmitted to anyone. This means that you do not have to worry about your data being leaked by us because we never see (and wouldn’t want to see) your data.

If your organisational security requirements require it, we’re happy for your security staff to audit the source code subject to the usual NDAs etc.

Internally, how do we separate data out for different teams?

Conical works on the basis of user defined products. These are logical groupings and typically represent a particular end user requirement (or a larger component in multi-tiered applications). The general approach is for each team to have 1 or more products which they can use to store their data.

Each of these products can be permissioned individually using a role based security model. Once the appropriate roles have been created, then these can be assigned to different users or groups to get the desired granularity.

Purchasing

Can we get an evaluation licence?

Yes. We would encourage everyone to try before they buy to ensure that the tool is suitable for their use-case.

If you have any questions as to whether or not the tool is suitable for your use-case, then please get in touch – contactus@conical.cloud

To get started with the evaluation licence, simply click here.

How do I purchase licences?
Full details on purchase arrangements and pricing are available here.

Hardware requirements

What are the requirements for Conical?

In order to run Conical, you need access to a way of running a Linux Docker container, a SQL Server instance and optionally a drive to store the actual data on. The required size of these components is a function of both the number of users and the volume of data being uploaded.

In general, as the majority of the computationally expensive calculations (the actual comparisons) are performed as part of the CI process and not on the Conical server, the required specs are actually quite low and it should be treated as a standard web site for performance planning.

The DB itself can either be a locally hosted SQL Server instance (2016 onwards) or a cloud hosted one. The major limiting factor for write performance tends to be the underlying DB; however, even this isn’t required to be particularly powerful, so we would start small and expand as necessary rather than overprovisioning at the beginning.

FWIW, for the demo instance, we’re using an Azure B2 machine with a basic level DB on the same network and the performance is more than adequate. If using a remote DB and a locally hosted server, then each query will be sensitive to the latency between the 2 machines, but this isn’t usually a problem.

Generation

How do I generate the test results?

The short answer is that it’s entirely up to you, as every product is different and what works for one may not work for another.

In general, you will need to either create a new console app to drive the process of sourcing the numbers for comparison (candidate vs. expected) or add an additional feature to your existing app to push to the tool. Note that for integration with CI/CD processes, this should be drivable through the command line.

However, given that there are obviously overlaps between various use-cases, we provide a series of utilities (currently .net only) to simplify the process of performing comparisons. These are:

  • Object comparison – Nuget
  • Object flattener – Nuget

Note that if you’re comparing the results of webservices (regardless of what language the original was written in), then the above will work very well for you.
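To make the shape of such a driver concrete, here’s a minimal Python sketch of the flatten-then-compare approach; all names and data structures here are purely illustrative, and the .net utilities above provide a much richer version of the same idea:

    def flatten(obj, prefix=""):
        # Flatten a nested structure into (path, value) pairs, e.g. ('greeks.delta', 0.5)
        if isinstance(obj, dict):
            for key, value in obj.items():
                yield from flatten(value, f"{prefix}.{key}" if prefix else str(key))
        elif isinstance(obj, (list, tuple)):
            for index, value in enumerate(obj):
                yield from flatten(value, f"{prefix}[{index}]")
        else:
            yield prefix, obj

    def compare(expected, candidate):
        # Return {path: (expected, candidate)} for every differing leaf value
        expected_flat = dict(flatten(expected))
        candidate_flat = dict(flatten(candidate))
        missing = object()  # sentinel for 'path absent on one side'
        return {
            path: (expected_flat.get(path, missing), candidate_flat.get(path, missing))
            for path in expected_flat.keys() | candidate_flat.keys()
            if expected_flat.get(path, missing) != candidate_flat.get(path, missing)
        }

    if __name__ == "__main__":
        expected = {"pv": 100.0, "greeks": {"delta": 0.5, "vega": 12.1}}
        candidate = {"pv": 100.2, "greeks": {"delta": 0.5, "vega": 12.1}}
        for path, (exp, cand) in compare(expected, candidate).items():
            print(f"{path}: expected {exp}, got {cand}")
        # A real driver would record a pass / fail status per test and then
        # upload the results (plus any differences) to Conical for analysis.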

How do I specify tolerances?

There are 2 places where tolerances can be used:

  1. The generation process
  2. Viewing results

As the generation process is custom per client, there’s no one standard answer here. However, if you’re using the provided comparison libraries, then they have built-in tolerance functionality.
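For illustration, a typical numeric tolerance check combines an absolute and a relative component. A minimal Python sketch (the threshold values are arbitrary examples, not defaults taken from our libraries):

    def within_tolerance(expected, candidate, abs_tol=1e-9, rel_tol=1e-6):
        # Treat values as equal if they differ by less than an absolute
        # tolerance OR by less than a relative fraction of their magnitude.
        diff = abs(expected - candidate)
        return diff <= abs_tol or diff <= rel_tol * max(abs(expected), abs(candidate))

    # e.g. a 0.5 move on a 1,000,000 PV can be treated as noise
    assert within_tolerance(1_000_000.0, 1_000_000.5, abs_tol=1.0)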

Depending on the nature of the results payload, there are various options for performing transformations (XSLT for XML) / filters (CSV / TSV data) on the data being displayed. Tolerances can be specified here to allow better analysis.

We have some sample code on GitHub which demonstrates how to compare some arbitrarily chosen data structures and then upload them to an instance. The code can be found here.

How do I store information which isn’t covered by an existing component type?

In the short term, generate a file containing the appropriate information and add it to the test run as an additional file. This is typically done with a user chosen ‘by convention’ file name so that it’s easy for downstream consumers to subsequently process if necessary.
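As a hypothetical Python example, if you wanted to record GC statistics alongside a test run, you might write them to an agreed file name before attaching the file:

    import json

    # The file name is a convention chosen by you, not something Conical mandates;
    # downstream consumers just need to know where to look.
    EXTRA_INFO_FILE_NAME = "gcStats.json"

    gc_stats = {"gen0Collections": 124, "gen1Collections": 7, "peakWorkingSetMb": 412}

    with open(EXTRA_INFO_FILE_NAME, "w") as handle:
        json.dump(gc_stats, handle, indent=2)

    # The file would then be attached to the test run as an additional file
    # via your usual upload mechanism (REST API / client library).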

In addition, contact us with your requirements and we’ll see what we can do.

How do I upload data to the tool?

From a technical perspective, data is uploaded to the tool via a REST API or via language specific access libraries (see below for more details).

In general, there are 2 approaches which can be taken:

  1. Updating your existing applications to have a button to upload the results of your analysis (Excel is also supported).
  2. Creating a custom app (usually console app to allow being driven through CI) to automate the comparisons and then publish to the tool for later manual analysis of any actual differences if necessary.

Which approach is most suitable for you depends heavily on your existing processes.

How do I integrate the tool into my CI/CD process?

The tool is designed to work as part of your CI/CD process. The assumption is that your tests can be run automatically from the command line and therefore the running of your regression tests can be added to existing pipelines / new pipelines.

There is more information as to various approaches to take here.

We have data in Excel, how do we upload from Excel?

We have an Excel add-in which we plan on making available for download shortly. From there, it’s possible to push data straight from Excel.

We’re doing a systems migration and have multiple data schemas, what do we do?

In this case, the comparison app would be responsible for normalising the data structures so that the differing formats can be compared. What the correct format is here is up to you, as it’s your data. Note that this format doesn’t have to be either of the input formats, merely something which can be compared.

Note that in these cases, we would typically upload the source files from both systems as additional files for the various test runs so that no information is lost in the case that differences are found.
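As a sketch of what that normalisation might look like in Python (the field names are hypothetical; the point is that both schemas map onto one common, comparable shape):

    # Hypothetical schemas: the legacy system reports 'tradeRef' / 'presentValue',
    # the new one 'tradeId' / 'pv'. Both are normalised to a common shape first.

    def normalise_legacy(record):
        return {"trade": record["tradeRef"], "pv": record["presentValue"]}

    def normalise_new(record):
        return {"trade": record["tradeId"], "pv": record["pv"]}

    legacy = {"tradeRef": "T-001", "presentValue": 99.7, "bookingSystem": "legacy"}
    candidate = {"tradeId": "T-001", "pv": 99.9}

    # Compare the normalised forms; the raw source records from both systems
    # would also be uploaded as additional files so nothing is lost.
    print(normalise_legacy(legacy) == normalise_new(candidate))  # False -> a difference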

If you would like some assistance with this, then please do get in touch with us – services@conical.cloud.

Evidence Sets

We have multiple sources of test data, how do we consolidate them?

Conical has the concept of ‘evidence sets’. These are immutable collections of test run sets which can be defined by using inputs from multiple products etc. We use them internally to provide a high level overview of all of the testing evidence for a given release.

Evidence sets can be created in multiple ways, either through the UI or programmatically.

They also facilitate the case where a test may have been re-run due to temporary problems (e.g. being unable to connect to an external source for one of the tests) without requiring all of the tests to be re-run. Where multiple contributing test runs are found, one of a few rules can be applied according to your use-case (their semantics are sketched after this list):

  • Use best result
  • Use worst result
  • Use first result
  • Use last result
  • Not allowed

Note that there’s no requirement for the source test run sets to be of the same type; any result sets can be collated together, even across multiple different products.
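To illustrate the semantics of those duplicate-handling rules, here’s a small Python sketch of the concept (this is purely an illustration, not Conical’s implementation; the status names and their ordering are examples):

    STATUS_RANK = {"passed": 0, "differences": 1, "failed": 2}  # illustrative ordering

    def resolve(runs, rule):
        # Pick a single test run from several runs of the same test
        if rule == "use_first":
            return runs[0]
        if rule == "use_last":
            return runs[-1]
        if rule == "use_best":
            return min(runs, key=lambda run: STATUS_RANK[run["status"]])
        if rule == "use_worst":
            return max(runs, key=lambda run: STATUS_RANK[run["status"]])
        raise ValueError(f"Duplicate runs not allowed under rule {rule!r}")

    runs = [{"name": "portfolio-1", "status": "failed"},   # transient connectivity issue
            {"name": "portfolio-1", "status": "passed"}]   # re-run after the fix
    print(resolve(runs, "use_last")["status"])  # passed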

Access

How do I programmatically access the data?

All of the site’s functionality is available programmatically through the REST API. This is fully documented on each instance through the ‘/swagger’ URL (Demo instance).

To access the site’s functionality programmatically, you will need to create an access token (through the profile page) and add this as a header (Authorization = ‘Bearer {token}’) to your requests.
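For example, using Python’s requests library (the endpoint path below is illustrative only; consult your instance’s ‘/swagger’ page for the actual routes and payloads):

    import requests

    BASE_URL = "https://your-conical-instance"  # your instance's URL
    TOKEN = "..."                               # created through the profile page

    # All requests are authenticated via the bearer token header
    headers = {"Authorization": f"Bearer {TOKEN}"}

    # Illustrative endpoint only - see /swagger for the real routes
    response = requests.get(f"{BASE_URL}/api/products", headers=headers)
    response.raise_for_status()
    print(response.json())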

For .net clients, there’s a NuGet package – BorsukSoftware.Conical.Client – which can be used to upload / download data etc. The source code for this library can be found on GitHub.

We are looking to create more client access libraries for different languages. If this is something that is required for your use-case, then please get in touch and we’ll increase its priority in the job queue.

Regression testing

We’re using the tool to view our regression tests, how do we simplify our update process?

When the tool is being used to view regression test results, best practice is to upload, for every failing test, an additional file containing the updated set of results to be used in the case that you are happy that the new version is correct. By naming these files in a consistent manner (or by having an additional file at the test run set level detailing which files should be updated for which test), this information can be downloaded programmatically and applied to your original data source (e.g. source control) without the need for you to do this manually as an end user.

This updating process can be driven by the CI/CD system as desired to have it as automated as possible.
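As a sketch of the apply step in Python, assuming the CI job has already downloaded each failing test’s additional files and that updated expectations follow an (example) ‘expected.updated.json’ naming convention:

    import shutil
    from pathlib import Path

    # Assumes each failing test's files were downloaded to downloaded/<testName>/
    DOWNLOADED = Path("downloaded")
    EXPECTED_ROOT = Path("tests/expected")  # location of expectations in source control

    for updated in DOWNLOADED.glob("*/expected.updated.json"):
        test_name = updated.parent.name
        target = EXPECTED_ROOT / test_name / "expected.json"
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(updated, target)
        print(f"Updated expectations for {test_name}")

    # The resulting changes can then be reviewed and committed as part of sign-off.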

Note that we’re working on adding release sign-offs to the tool so that once all approvals have been made, then a web-hook can be fired which will allow the triggering, as appropriate, of the aforementioned CI job.

Terminology

What do the terms mean?
A full description of the terms can be found here.

Others

If all the comparison functionality is available outside of the website, then why do I need the website?

The use of the website is optional if, as an organisation, there’s already a good build-test-release workflow process, especially around the analysis / sign-off of differences. However, the majority of organisations do not have these, and as such release managers / product owners don’t always have good visibility of differences to review / make decisions on.

Note that it’s perfectly possible to have the data from your Conical instance feed into your existing processes etc. The basic premise that we have is ‘it’s your data, we want it to be more visible to you’. Full documentation of what’s supported is available through looking at the swagger page (demo).

I have a suggestion for how to make the tool better, what do I do?
Please contact us – suggestions@conical.cloud