The development team behind Conical has spent the last 20 years creating software for the financial services sector. During that time, they’ve worked for both large and small companies using a variety of methodologies, and although the companies differed, the problems around releasing were always the same.
Different organisations take different approaches. Some that we’ve come across (and we’re not necessarily recommending them) include:
- Very large-scale manual testing of each release, with each release taking months
- Release and pray – run the specific tests for the feature you’ve just built and then hope it doesn’t break anything else
- Different builds for every user
Each of these approaches (save for #2) was designed to minimise the risk of a change having inadvertent consequences for existing users; however, they came at a large cost in terms of time to market for features and general supportability. #2 is an interesting approach which may work for social networks, but it isn’t one we’d recommend for most use-cases.
To strike a balance between not releasing breaking changes and actually being able to release new functionality, we needed a way to assuage people’s concerns about the impact of a proposed change on the wider product environment, whether that change was to the code (i.e. a new version of the software) or to the configuration (e.g. what data should feed into an algorithm).
Frequently, changes would be expected, so the question was no longer ‘prove to me that nothing changes’ but rather ‘show me the impact of the change so that I can make a judgement call’. With this in mind, we built custom tools to drive the existing system with both the existing code / configuration and the candidate code / configuration across multiple use-cases (for a financial services firm, this could mean different risk jobs across a range of supported portfolios) and then store the results, allowing them to be subsequently viewed and summarised for ease of analysis.
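To make that dual-run idea concrete, here’s a minimal sketch of one way such a harness could look – in Python, with entirely hypothetical runner functions, portfolio identifiers and output layout rather than anything a particular tool prescribes:

```python
# A minimal sketch of the dual-run workflow described above. The names
# run_current and run_candidate are hypothetical stand-ins for whatever
# drives the existing system and the proposed change for a single use-case.
import json
from pathlib import Path


def run_current(portfolio: str) -> dict:
    """Drive the existing code / configuration for one use-case (stand-in)."""
    ...


def run_candidate(portfolio: str) -> dict:
    """Drive the candidate code / configuration for one use-case (stand-in)."""
    ...


def capture_results(portfolios: list[str], output_dir: Path) -> None:
    """Run both versions across every use-case and store the paired results,
    so the impact of the change can be reviewed and summarised later."""
    output_dir.mkdir(parents=True, exist_ok=True)
    for portfolio in portfolios:
        paired = {
            "useCase": portfolio,
            "current": run_current(portfolio),
            "candidate": run_candidate(portfolio),
        }
        (output_dir / f"{portfolio}.json").write_text(json.dumps(paired, indent=2))
```

The point isn’t the mechanics of any one script; it’s that the stored pairs turn the conversation from “prove nothing changed” into “here’s exactly what changed”.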
Having created similar workflows in multiple organisations, the development team decided it’d be worthwhile building a tool to make it easier for other organisations to release their code faster.
Given that every organisation’s product is different, it’s not possible to have a single tool which will test your software for you. Instead, we took a more flexible approach: a set of components which organisations can pick and choose from to help test their software. Most organisations will use the website functionality and some of the comparison libraries, but the latter aren’t required if an organisation already has suitable tools to generate its comparisons. It’s also possible to use the comparison functionality without the website – the libraries can simply be downloaded from our nuget page and used on their own.
To give an example, where the product being tested is based on the responses from a webservice, there might be:
- The code to call the webservice – custom
- The code to group the results into the form to be compared (e.g. grouping the results by underlying trade) – custom
- The code to compare sets of numbers, strings etc. – generic
- The code to represent the comparisons in a way which makes sense for your product – custom
- The code to store the data – generic
- The code to make the results available / visible on the web – generic
Other workflows, e.g. those based on Excel as the source of calculations, would have a different step for #1, but the rest of the steps would look much the same – see the sketch below.
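As a loose illustration of how the custom and generic pieces in that list might fit together – again in Python, and again with entirely hypothetical endpoints, field names and helper functions rather than Conical’s actual libraries:

```python
# A rough sketch of the webservice-based example above. Steps 1, 2 and 4 are
# the custom, product-specific pieces; step 3 is the kind of generic comparison
# a shared library can provide. All names here are invented for the example.
import json
import urllib.request
from collections import defaultdict


def call_webservice(url: str) -> list[dict]:
    # 1. Calling the webservice - custom to the product under test.
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())


def group_by_trade(rows: list[dict]) -> dict[str, float]:
    # 2. Grouping results into a comparable form (here, a value per trade) - custom.
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["tradeId"]] += row["value"]
    return dict(totals)


def compare_numbers(old: float, new: float, tolerance: float = 1e-9) -> bool:
    # 3. Comparing numbers within a tolerance - generic, reusable anywhere.
    return abs(old - new) <= tolerance


def summarise(old: dict[str, float], new: dict[str, float]) -> list[str]:
    # 4. Presenting the comparison in product-specific terms - custom.
    report = []
    for trade_id in sorted(old.keys() | new.keys()):
        if not compare_numbers(old.get(trade_id, 0.0), new.get(trade_id, 0.0)):
            report.append(f"{trade_id}: {old.get(trade_id)} -> {new.get(trade_id)}")
    return report


# 5 & 6. Storing the results and making them visible on the web are the
# generic steps where a shared tool, rather than bespoke code, does the work.
```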
The tool is now available for general consumption. Click here to get started.