FAQs

Here are answers to some frequently asked questions. If you have a question which isn’t answered here, then please get in touch with us.

General

What is Conical?

Conical is a suite of tools designed to make it easier to understand the impact of proposed code, process or configuration changes on systems and to subsequently sign those changes off ready for release. It is mainly targeted at higher-level components / systems where a wide range of inputs can affect the overall output and where these impacts need to be analysed prior to release.

The main visible component is a website (demo) which allows you to view the impact of proposed changes. We also provide a range of tools to generate the differences for display and analysis.

In addition to allowing users to view / analyse any differences caused by a change, another major use-case of Conical is as a centralised repository / audit trail of test results. This allows end users to see that their use-cases have been tested, giving them greater confidence in the impact of any change and making them more willing to sign off the change and get it into production sooner.

Explicitly, it is not a replacement for your CI system, nor for having suitable unit tests.

How do I get started?
We have full instructions here.
Why should I use the tool when I can use unit tests?

Conical and unit tests aren’t looking to solve the same problem. Unit tests are fantastic when the total range of inputs is known and there’s a well-defined, previously known answer as to what is correct. We use unit tests heavily within Conical to ensure code quality and strongly recommend having a suitable suite of unit tests for your own projects.

However, from a high-level system perspective, unit tests by themselves do not cover all of the testing requirements. There will also be a need for integration tests to test the interactions between components, as well as regression tests to validate the system against larger volumes of data; e.g. for a risk management system, one would typically wish to perform regression testing on whole, real-world portfolios. What testing is required is a function of the product being tested; e.g. the testing requirements for Conical are very different to those of a risk management system.

For a lot of changes to a system, whether they are code, config or process changes, there will be both an expected set of impacts and portions of the system where no impact is expected. Conical assists in this process by making it very simple to see the impact of the change as well as to confirm the lack of impact in other areas. With this visibility, it’s easier for both developers and end users to sign off on the proposed changes.

Why should I use the tool when I’m already using CI?

Conical is not intended to replace your existing CI system and never will be. The typical use-case is that the CI process is extended to run more integration / regression tests and publish the results to Conical for analysis, so that in the case of differences, decision makers can decide whether or not the impact is correct.

For more information on how Conical has been integrated with CI systems, click here; for case studies, click here.

Why do you use the term differences rather than test failures?

Conical is designed to help people understand the impact of proposed changes to their system, rather than running explicit pass / fail tests where any change is treated as a failure. If your use-case is the latter, then unit tests may well be a better option for you.

To that end, the aim of the tool is to make it as easy as possible for people to see the impact of a change so they can make a conscious decision as to whether or not the impact is acceptable / intended before releasing their change.

My application is in Excel, can I use the tool?

Yes. We have an Excel add-in available which can be used to upload data to the Conical server.

Typically, when we have an Excel sheet to be tested, we do this by constructing an additional automation sheet, using VBA, which can repeatedly (a sketch of this loop follows the list):

  1. Load up the sheet to be tested
  2. Set the appropriate input range – e.g. portfolio / date selection
  3. Calculate the sheet
  4. Extract out the results range to the main automation sheet
  5. Source the set of expected results for comparison (this can be from reading in a file, sourcing from a DB, running the official version of the sheet etc.)
  6. Perform the comparison in the automation sheet (standard Excel functionality) and store the results in a named range
  7. Decide, based on the comparison, whether or not that test should be treated as a pass or a failure
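Although we drive this loop with VBA, the same automation could equally be driven from outside Excel. As a purely illustrative sketch, here is roughly what the loop might look like in Python using the xlwings library; the workbook name, sheet names, ranges and portfolios are all invented for illustration:

    # Sketch of the automation loop driven from Python via xlwings rather than
    # VBA. Workbook / sheet names, ranges and portfolios are all invented.
    import xlwings as xw

    wb = xw.Book("pricing.xlsx")                       # 1. load the sheet to be tested

    results = {}
    for portfolio in ["Portfolio A", "Portfolio B"]:   # hypothetical inputs
        wb.sheets["Inputs"].range("B2").value = portfolio  # 2. set the input range
        wb.app.calculate()                                 # 3. calculate the sheet
        results[portfolio] = wb.sheets["Results"].range("C2:C50").value  # 4. extract the results
        # Steps 5-7: source the expected results, compare (with tolerances
        # as needed) and record pass / fail for this combination.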

Once all of the combinations have been run, the Excel add-in can be used to upload all of this data to Conical, and the differences can be analysed / used for audit-trail purposes.

We will be uploading a sample of this use-case shortly. If this is your use-case and you need assistance prior to that, please contact us – services@conical.cloud.

My application is Python-based, can I use the tool?

Yes.

We’ve put together a small access component for uploading data to the tool. This can be found in this blog post.

If you have additional requirements for ancillary data then please contact us so we can add official support for your data types.

My application is Java etc. based, can I use the tool?

Yes.

All communication with the tool is via a REST API, with the API description being publicly available (click here for the demo instance’s swagger endpoint), so any language can be used to upload / download data.

Currently we have support for ancillary test data such as .net assemblies and memory usage. If your requirements are different, then contact us and we can add your additional requirements to the list of supported components.

We do manual testing, why would we need Conical?

The general aim of Conical is to speed up / improve release processes and there are multiple ways that Conical can help manual test processes.

Initially, the tool can be used as a results store / audit trail to provide test evidence for your users. Subsequently, the testing processes can be improved to add more automation, thereby reducing the human time taken to perform the testing.

We’ve used this approach to confirm that manual testing has been done for certain non-Conical-related projects where automated testing hasn’t been feasible or hasn’t yet been implemented.

Clearly, whether or not it’s worthwhile depends on your use-case / release requirements.

We want to be able to access our test results whilst on the move, is there an app?

There is no mobile app planned. However, the website itself should be fully mobile compatible / friendly / responsive.

If you find that there are issues, then please get in touch with us and we’ll rectify accordingly.

We’re not very technical but would like to improve our testing, can you help?

Yes. In addition to the software, we also provide a range of custom consultancy services to help your business. To enquire about these services, please contact us – services@conical.cloud.

What features are coming next?

For a description of our road map, please see here.

Note that if you have any requests for features etc. then please do let us know – suggestions@conical.cloud

Purchasing

Can we get an evaluation licence?

Yes. We would encourage everyone to try before they buy to ensure that the tool is suitable for their use-case.

If you have any questions as to whether or not the tool is suitable for your use-case, then please get in touch – contactus@conical.cloud

To get started with the evaluation licence, simply click here.

How do I purchase licences?
Full details on purchase arrangements and pricing are available here.

Hardware requirements

What are the requirements for Conical?

In order to run Conical, you need access to a way of running a Linux Docker container, a SQL Server instance and, optionally, a drive to store the actual data on. The required size of these components is a function of both the number of users and the volume of data being uploaded.

In general, as the majority of the computationally expensive calculations (the actual comparisons) are performed as part of the CI process rather than on the Conical server, the required specs are actually quite low; for performance planning, it should be treated as a standard website.

The DB itself can either be a locally hosted SQL Server instance (2016 onwards) or a cloud-hosted SQL Server instance. The major limiting factor for write performance tends to be the underlying DB; however, even this isn’t required to be particularly powerful, so we would recommend starting small and expanding as necessary rather than overprovisioning at the beginning.

For reference, the demo instance uses an Azure B2 machine with a basic-tier DB on the same network, and the performance is more than adequate. If using a remote DB and a locally hosted server, each query will be sensitive to the latency between the 2 machines, but this isn’t usually a problem.

Generation

How do I generate the test results?

The short answer is that it’s entirely up to you, as every product is different and what works for one may not work for another.

In general, you will need either to create a new console app to drive the process of sourcing the numbers for comparison (candidate vs. expected) or to add an additional feature to your existing app to push results to the tool. Note that, for integration with CI/CD processes, this should be drivable from the command line.

However, given that there are obviously overlaps between various use-cases, we provide a series of utilities (currently .net only) to simplify the process of performing comparisons. These are:

  • Object comparison – Nuget
  • Object flattener – Nuget

Note that if you’re comparing the results of webservices (regardless of what language the original was written in), then the above will work very well for you.
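To illustrate the general flatten-and-compare approach which those utilities implement, here is a minimal Python sketch; it is illustrative only, not the .net NuGet packages themselves, whose APIs differ:

    # Minimal sketch of flattening nested results into path/value pairs and
    # diffing them; illustrative only, not the NuGet packages' actual API.
    def flatten(obj, prefix=""):
        """Flatten nested dicts / lists into a {path: value} mapping."""
        if isinstance(obj, dict):
            items = obj.items()
        elif isinstance(obj, list):
            items = enumerate(obj)
        else:
            return {prefix: obj}
        flat = {}
        for key, value in items:
            path = f"{prefix}.{key}" if prefix else str(key)
            flat.update(flatten(value, path))
        return flat

    def compare(candidate, expected):
        """Return {path: (candidate, expected)} for every differing leaf."""
        flat_c, flat_e = flatten(candidate), flatten(expected)
        return {path: (flat_c.get(path), flat_e.get(path))
                for path in set(flat_c) | set(flat_e)
                if flat_c.get(path) != flat_e.get(path)}

    differences = compare(
        {"pv": 100.2, "greeks": {"delta": 0.5}},
        {"pv": 100.0, "greeks": {"delta": 0.5}},
    )
    print(differences)  # {'pv': (100.2, 100.0)}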

How do I specify tolerances?

There are 2 places where tolerances can be used:

  1. The generation process
  2. Viewing results

As the generation process is custom per client, there’s no one standard answer here. However, if you’re using the provided comparison libraries, then they have built-in tolerance functionality.

When viewing results, depending on the nature of the results payload, there are various options for performing transformations (e.g. XSLT for XML) or filters (for CSV / TSV data) on the data being displayed. Tolerances can be specified here to allow better analysis.
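As a sketch of what a generation-time tolerance check might look like (the names and defaults here are illustrative; the provided comparison libraries have their own equivalents):

    # Illustrative tolerance check for generation-time comparisons; the
    # provided .net comparison libraries expose their own equivalent.
    def within_tolerance(candidate, expected, abs_tol=1e-9, rel_tol=1e-6):
        """Treat two values as equal if they differ by less than an absolute
        or relative tolerance, whichever is looser."""
        return abs(candidate - expected) <= max(abs_tol, rel_tol * abs(expected))

    print(within_tolerance(100.0000001, 100.0))  # True - inside relative tolerance
    print(within_tolerance(100.2, 100.0))        # False - reported as a difference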

We have some sample code on GitHub which demonstrates how to compare some arbitrarily chosen data structures and then upload them to an instance. The code can be found here.

How do I store information which isn’t covered by an existing component type?

In the short term, generate a file containing the appropriate information and add it to the test run as an additional file. This is typically done with a user-chosen ‘by convention’ file name so that it’s easy for downstream consumers to process subsequently if necessary.
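For example (a sketch only; the file name and payload below are arbitrary and simply follow the ‘by convention’ idea):

    import json

    # 'custom-metrics.json' is a purely illustrative convention; any name
    # agreed with your downstream consumers works equally well.
    with open("custom-metrics.json", "w") as f:
        json.dump({"peakMemoryMb": 512, "gcPauses": 3}, f, indent=2)

    # The file would then be attached to the test run as an additional file
    # via whichever upload mechanism you're already using.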

In addition, contact us with your requirements and we’ll see what we can do.

How do I upload data to the tool?

From a technical perspective, data is uploaded to the tool via a REST API or via language-specific access libraries (see below for more details).

In general, there are 2 approaches which can be taken:

  1. Updating your existing applications to have a button to upload the results of your analysis (Excel is also supported).
  2. Creating a custom app (usually a console app, so that it can be driven through CI) to automate the comparisons and then publish the results to the tool for later manual analysis of any differences if necessary (see the sketch below).

Which approach is most suitable for you depends heavily on your existing processes.
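As a sketch of the second approach, a minimal console app in Python might look something like the following. Note that the instance URL, endpoint path and payload are all hypothetical; consult your instance’s swagger page for the actual REST API:

    import json
    import sys

    import requests

    CONICAL_URL = "https://conical.example.com"  # assumption: your instance's URL
    TOKEN = "..."                                # access token from the profile page

    def publish_results(product, results):
        """Publish a set of comparison results; the route below is hypothetical."""
        response = requests.post(
            f"{CONICAL_URL}/api/products/{product}/testruns",  # see /swagger for real routes
            headers={"Authorization": f"Bearer {TOKEN}"},
            json=results,
        )
        response.raise_for_status()

    if __name__ == "__main__":
        candidate = json.load(open(sys.argv[1]))
        expected = json.load(open(sys.argv[2]))
        status = "passed" if candidate == expected else "differences"
        publish_results("my-product", {"status": status,
                                       "candidate": candidate,
                                       "expected": expected})
        sys.exit(0 if status == "passed" else 1)  # non-zero exit lets CI react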

How do I integrate the tool into my CI/CD process?

The tool is designed to work as part of your CI/CD process. The assumption is that your tests can be run automatically from the command line, and therefore the running of your regression tests can be added to existing or new pipelines.

There is more information on the various approaches to take here.

We have data in Excel, how do we upload from Excel?

We have an Excel add-in which we plan on making available for download shortly. From there, it’s possible to push data straight from Excel.

We’re doing a systems migration and have multiple data schemas, what do we do?

In this case, the comparison app would be responsible for normalising the data structures so that the differing formats can be compared. The choice of format is up to you, as it’s your data. Note that this format doesn’t have to be either of the input formats, merely something which can be compared.

Note that in these cases, we would typically upload the source files from both systems as additional files for the various test runs, so that no information is lost should differences be found.
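As a sketch of the normalisation idea (both source schemas and all field names below are invented for illustration):

    # Two invented source schemas normalised into one comparable format;
    # your real schemas, field names and target format will differ.
    def normalise_legacy(record):
        return {"trade_id": record["TradeRef"],
                "notional": float(record["Amount"]),
                "currency": record["Ccy"].upper()}

    def normalise_new(record):
        return {"trade_id": record["id"],
                "notional": record["notional"],
                "currency": record["currency"].upper()}

    legacy = {"TradeRef": "T-001", "Amount": "1000000", "Ccy": "usd"}
    new_style = {"id": "T-001", "notional": 1000000.0, "currency": "USD"}

    # Once normalised, the two records can be compared directly.
    assert normalise_legacy(legacy) == normalise_new(new_style)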

Access

How do I programmatically access the data?

All of the site’s functionality is available programmatically through the REST API. This is fully documented on each instance through the ‘/swagger’ URL (Demo instance).

To access the site’s functionality programmatically, you will need to create an access token (through the profile page) and add this as a header (Authorization = ‘Bearer {token}’) to your requests.
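For example, in Python (the instance URL and endpoint path below are hypothetical; the ‘/swagger’ page documents the real routes):

    import requests

    token = "..."  # created through the profile page

    response = requests.get(
        "https://demo.conical.cloud/api/products",  # hypothetical route - see /swagger
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    print(response.json())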

For .net clients, there’s a NuGet package – BorsukSoftware.Conical.Client – which can be used to upload / download data etc. The source code for this library can be found on GitHub.

We are looking to create more client access libraries for different languages. If this is something that is required for your use-case, then please get in touch and we’ll increase its priority in the job queue.

Regression testing

We’re using the tool to view our regression tests, how do we simplify our update process?

When the tool is being used to view regression test results, best practice is to upload, to every failing test, an additional file containing the updated set of results to be used should you be happy that the new version is correct. By naming these files in a consistent manner (or by having an additional file at the test run set level detailing which files should be updated for which test), this information can be downloaded programmatically and applied to your original data source (e.g. source control) without you, as an end user, needing to do it manually.

This updating process can be driven by the CI/CD system as desired to have it as automated as possible.
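A sketch of what such a CI-driven update job might look like is below. Every route and file name is hypothetical (the ‘updated.expected.json’ name simply stands in for whatever consistent convention you choose); the real API is documented on your instance’s swagger page:

    import pathlib

    import requests

    CONICAL_URL = "https://conical.example.com"  # assumption: your instance's URL
    HEADERS = {"Authorization": "Bearer ..."}    # access token from the profile page

    # Hypothetical routes - consult /swagger for the actual API.
    run_set = requests.get(f"{CONICAL_URL}/api/testrunsets/1234",
                           headers=HEADERS).json()

    for test in run_set["tests"]:
        if test["status"] != "failed":
            continue
        payload = requests.get(
            f"{CONICAL_URL}/api/testruns/{test['id']}/files/updated.expected.json",
            headers=HEADERS,
        )
        if payload.ok:
            # Write the refreshed expectations back into the checked-out repo
            # so that a follow-up commit / PR can be raised by the CI job.
            target = pathlib.Path("expected") / f"{test['name']}.json"
            target.parent.mkdir(exist_ok=True)
            target.write_bytes(payload.content)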

Note that we’re working on adding release sign-offs to the tool so that, once all approvals have been made, a webhook can be fired which will allow the triggering, as appropriate, of the aforementioned CI job.

Terminology

What do the terms mean?
A full description of the terms can be found here.

Others

If all the comparison functionality is available outside of the website, then why do I need the website?

The use of the website is optional if, as an organisation, you already have a good build-test-release workflow, especially around the analysis / sign-off of differences. However, the majority of organisations do not, and as such there isn’t always good visibility of differences for release managers / product owners to review and make decisions on.

Note that it’s perfectly possible to have the data from your Conical instance feed into your existing processes etc. Our basic premise is ‘it’s your data, we want it to be more visible to you’. Full documentation of what’s supported is available through the swagger page (demo).

I have a suggestion for how to make the tool better, what do I do?
Please contact us – suggestions@conical.cloud