Product Roadmap

Conical is under continual development, with new features added to keep providing more value to our customers. Our current book of work includes, at a minimum, the following features.

That said, if there are features you would like to see, or suggestions for functionality that would help you, please do get in touch.

Results Analysis

We would like to add further features to the UI for analysing test results, including:

  • Ability to query XML data with XPath queries (this can already be done using the XSLT functionality, but we’d like it to be simpler for ad-hoc analysis; see the sketch after this list)
  • Ability to query JSON data
  • Ability to see the history of a test run and compare runs accordingly
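To make the intended usage concrete, here is the kind of ad-hoc XPath query we have in mind, sketched in Python with the lxml library. The sample XML document and the queries are purely illustrative; they are not a Conical result format.

```python
# Ad-hoc XPath queries over XML test output, sketched with lxml.
# The XML document below is illustrative, not a Conical result format.
from lxml import etree

xml = b"""
<results>
  <test name="fx-curve" status="failed"><diff>0.0003</diff></test>
  <test name="ir-curve" status="passed"/>
</results>
"""

tree = etree.fromstring(xml)

# Names of all failed tests.
for name in tree.xpath("//test[@status='failed']/@name"):
    print(name)

# The numeric diff recorded for a specific test, if any.
diffs = tree.xpath("//test[@name='fx-curve']/diff/text()")
print(float(diffs[0]) if diffs else "no diff recorded")
```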

Additional Language Support

We would like to add further support for JVM-based languages by adding JVM-specific result types (JVM memory snapshots, module lists, etc.) as well as a native access layer.

User Management

We would like to extend the range of identity providers that we support to include:

  • Active Directory integration
  • Social media providers
  • Auth0

In-Tool Release Sign Off

Currently, the tool allows a user to publish data to the site, where it can both be viewed by decision makers and retained for subsequent validation of the sign-off process.

We would like to add the ability to perform sign-off for a given release within the tool. This would involve:

  • Defining the sign-off process
    • What steps are required for sign-off
    • Who needs to sign off
  • Ability to trigger CI/CD jobs (e.g. TeamCity, Jenkins) when sign-offs occur
  • Ability to provide rules for automatically marking failures as ‘passed after review’ if the differences are within a given tolerance (see the sketch after this list)
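As a rough illustration of what we mean, the sketch below applies a per-project tolerance rule to a failed comparison and then triggers a downstream Jenkins job via Jenkins’ standard remote-build endpoint. The rule format, project name, server URL, and credentials are all hypothetical placeholders rather than a Conical API.

```python
# Hypothetical sketch of a data-driven sign-off rule plus a CI trigger.
import requests

# Per-project rule: failures within tolerance become 'passed after review'.
# The rule format and project name are illustrative assumptions.
rule = {"project": "fx-regression", "metric": "abs_diff", "tolerance": 1e-4}

def review_status(abs_diff: float, tolerance: float) -> str:
    """Auto-mark sufficiently small differences as passed after review."""
    return "passed after review" if abs_diff <= tolerance else "failed"

print(review_status(3e-5, rule["tolerance"]))  # passed after review
print(review_status(2e-3, rule["tolerance"]))  # failed

# On sign-off, kick off a downstream job using Jenkins' remote-build
# endpoint (server URL, job name, and credentials are placeholders).
def trigger_jenkins(job: str) -> None:
    response = requests.post(
        f"https://jenkins.example.com/job/{job}/build",
        auth=("ci-user", "api-token"),
    )
    response.raise_for_status()
```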

This would be entirely data-driven and customisable per project within the tool.

Note that this is intended to be a step within your existing CI/CD tooling / workflow. We’re not in the business of creating a replacement for that!

Manual Results Uploading / Play Lists

The majority of the testing that we currently perform using the tool is based on automated processes which can generate test results without human interaction. However, there are several use cases across multiple industries where it might not be easy to run the tests automatically and, instead, they are largely manual.

Currently, these manual processes can be handled by entering the data into a temporary repository (we’re assuming Excel) and then, once all the tests have been run, pushing the data to the tool for analysis / sign-off.
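As a sketch of that intermediate step, the snippet below reads manually recorded results from an Excel workbook with openpyxl and pushes them to the tool in one batch. The sheet layout, upload URL, and payload fields are assumptions made for illustration, not Conical’s actual API.

```python
# Sketch of the Excel-based workaround: collect manually recorded results
# and push them in one batch. The endpoint, payload fields, and the sheet
# layout (name / status / notes columns) are illustrative assumptions.
import requests
from openpyxl import load_workbook

sheet = load_workbook("manual_results.xlsx").active

# Assumed layout: column A = test name, column B = status, column C = notes.
results = [
    {"name": name, "status": status, "notes": notes}
    for name, status, notes in sheet.iter_rows(min_row=2, values_only=True)
]

response = requests.post(
    "https://conical.example.com/api/upload-results",  # placeholder URL
    json={"testRunSet": "release-2024.1", "results": results},
    headers={"Authorization": "Bearer <token>"},
)
response.raise_for_status()
```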

We would like to extend this functionality so that a user can define a test book to be performed repeatedly on a per-release basis. When the test list is to be executed, the user would be able to work through it inside the tool and upload the results directly, rather than having to go through an intermediate step.