New version released

We’re pleased to announce that a new version of Conical is now available. This contains a few small technical improvements as well as some further UX improvements.

Highlighting of flattened JSON

The highlighting functionality from the results text display has been brought into the flattened results JSON display. This should assist in interpreting test results.

Better 404 experience

When navigating to an incorrect link, users are now shown a more helpful 404 page.

Data deletion

A bug was discovered whereby data wasn’t always removed from disc when a test run or evidence set was deleted. This has been rectified.

Note that any data which was previously left dangling is unaffected by this change. We are adding functionality, which will be available in a future version, to make it easier to clean up this data.

As usual, if you have any requests, suggestions or comments then please do get in touch.

Happy testing!

New version released

We’re pleased to announce that a new version of Conical is now available. This comes with a range of UX improvements to make the tool more powerful. These changes include:

Results text / logs

When viewing these, the user can now apply filters to narrow down the range of rows which are displayed. This can be useful when trying to find specific messages in the output. To further assist with this, users can choose to highlight rows which match their criteria. By using these two features, it should be simpler to hunt down the rows of interest.
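As a purely conceptual illustration of the difference between the two (this is not how Conical implements it), filtering removes non-matching rows from the display, whereas highlighting keeps every row and simply marks the matches:

# Conceptual sketch only - not Conical's implementation.
log_rows = [
    "INFO  starting run",
    "ERROR could not connect to the pricing service",
    "INFO  retrying",
    "INFO  run complete",
]
criterion = "ERROR"

# Filtering: only matching rows remain visible.
filtered = [row for row in log_rows if criterion in row]

# Highlighting: every row remains visible, matches are flagged.
highlighted = [(row, criterion in row) for row in log_rows]

print(filtered)
print(highlighted)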

Additionally, where the source data points are large, users can now use pagination to improve the responsiveness of the browser.

Results JSON

It’s now possible to flatten the results JSON and apply filters to the flattened data.

Improved UX for non-logged-in users

Previously, when a user who wasn’t logged in clicked on a link, they were presented with an error screen and had to navigate to the profile page, log in and then click on the link again. To improve that experience, they can now log in directly from the error page.

Filtering

Filtering functionality has been added to audit trails and .NET assembly information.

Additional files

Some tidying up has been done here to improve the consistency of the experience across all usages of additional files.

As usual, if you have any requests, suggestions or comments then please do get in touch.

Happy testing!

New version released

Although the blog has been quiet for the last couple of months, our keyboards have been anything but. We’ve been working with our clients to add features to simplify their processes and improve their ability to present testing results to their clients. These new features include:

UX improvements

A lot of small changes have been made to the tool to improve its usability. We’re always keen on user feedback, so if there are any aspects of the tool which you think we can improve, then please do let us know.

Lightboxes for media

As part of extending the tool to better facilitate UX testing, users can now view any additional files in a lightbox.

Product dashboards

Users can now upload multiple dashboards per product (as opposed to previously only being able to configure the front page). This functionality can be thought of as a ‘mini CMS for test results’, allowing users to create customised presentations of the data, typically a dashboard per release or CI pipeline. These dashboards can contain standard HTML alongside Conical-specific widgets for accessing test data.

We are currently using them to allow our clients to present an overview of a release candidate’s testing status, thereby allowing their project owners to see the status at a glance.

We currently have support for embedding searches alongside their results. Additional widgets will be added as user requirements become clearer. If you have suggestions or requirements for additional widgets which would be useful, then please do get in touch.

As usual, if you have any questions about Conical or how we can help you improve your build, test and release processes, then please do contact us – contactus@conical.cloud.

Happy testing

New version released

We’ve been busy this last month helping our clients use Conical to improve their testing processes. We’ve got some cool new features coming out of this work which will be announced and released shortly, but in the meantime, we’re pleased to announce that a new version of Conical has been released.

This version has a few improvements, including:

  • Improvements to the UX in the admin section
  • Improved user searching functionality
  • Long tags now fail gracefully (BadRequest) rather than causing a server error
  • Server metrics: free space on mounted discs on Linux is now reported correctly

Additionally, we found a problem with product-level aliases when combined with product-level privileges. This has now been fixed.

If you would like us to assist you with your testing and release processes, then do get in touch with us at services@conical.cloud. We can help organisations of all sizes and types, and we relish a challenge!

As usual, any questions, suggestions or comments, please get in touch.

Happy testing.

New version released

We’re pleased to announce a new version of Conical has been released.

This version has a few new features, the main ones being:

  • The internal storage of ‘creator’ for test run sets has been updated so that it reflects the current name of the user who uploaded the data, rather than their name at the time of upload
  • Usernames can now contain ‘.’

Note that this is the first version of Conical for which automated Selenium tests have been used as part of the release testing process. We will provide more details on how we use Selenium in future blog posts / updates. As part of this testing, we’ve also added a range of features to the book of work which will help other users use Conical for their own UI testing.

Happy testing

New version released

We’re pleased to announce a new version of Conical has just been released.

Along with a few minor UI tweaks, the main feature of this release is the ability to specify ad hoc XPath queries when looking at results XML.

This simplifies the analysis of results where the user wishes to perform some quick querying on the data without needing to create a custom XSLT.

Note that this functionality doesn’t replace the more powerful XSLT transformation feature but should be seen as a quick investigation tool, with the XSLT feature then being used once the requirements are better known (remember that the XSLT functionality allows for parameterisation).

To use the feature, simply click on the search icon in the results XML tab and follow the prompts.
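As an illustration of the sort of quick query this enables, here’s a minimal sketch using Python’s standard library XPath support; the element and attribute names below are invented for the example and won’t match any particular Conical results schema.

# Illustrative only - the XML structure and names are invented.
import xml.etree.ElementTree as ET

results_xml = """
<results>
  <test name="pricing/swap" status="passed"/>
  <test name="pricing/swaption" status="failed"/>
  <test name="risk/delta" status="failed"/>
</results>
"""

root = ET.fromstring(results_xml)

# Equivalent of entering the ad hoc expression .//test[@status='failed']
for test in root.findall(".//test[@status='failed']"):
    print(test.get("name"))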

As usual, if you have any comments, feedback or suggestions, then please do get in touch.

Happy testing

New version released

We’re pleased to announce that we’ve just released a new version of Conical. The major features are:

  • Ability to search both test run sets and evidence sets by tags through the UI
  • Ability to see the history of a test run
  • Improvements to the UI to avoid unnecessarily reloading data

As usual, if you have any comments, questions or feedback, then please do get in touch.

Happy testing

Uploading unit test results

Although Conical has never been intended to replace existing unit test / CI workflow tools, it’s fairly common for teams to have a series of what are actually integration or regression tests structured as unit tests (if only because it’s rather easy to do).

Obviously, in these circumstances, we would tend to advocate using a more appropriate, specialised piece of software to handle the different requirements involved. However, we acknowledge that in a lot of circumstances this might be overkill, and as our aim is to help you improve your testing at a reasonable cost rather than to chase a prohibitively expensive and unrealistic testing perfection, we’re pragmatic about how we can support people’s existing processes.

To that end, we’ve released a new tool on NuGet, BorsukSoftware.Conical.Tools.TRXUploader (Source – GitHub). Full instructions on how to use the tool are provided on the GitHub page.

With this approach, it’s possible to keep your existing testing processes but report your results in a nicer, more accessible fashion, and then subsequently improve the generation process if that would be beneficial to your product.

Generating TRX files

To refresh your memory, it’s very easy to generate a TRX file from the command line. Navigate to the directory containing your tests’ project file and run:

dotnet test --results-directory ../testOutput --logger "trx;LogFileName=output.trx"

This will generate a TRX file in the specified output directory.

Installing the upload tool

The tool is packaged as a .NET tool, so you can follow the instructions on MSDN. In short:

  1. Create a tool manifest
  2. Install or update the tool
  3. Run the tool

Note that we would always recommend updating the tool as well, in order to pick up the latest version.

These instructions expand into:

# Create manifest
dotnet new tool-manifest

# Install tool
dotnet tool install BorsukSoftware.Conical.Tools.TRXUploader

# Update tool
dotnet tool update BorsukSoftware.Conical.Tools.TRXUploader

# Run tool
dotnet tool run BorsukSoftware.Conical.Tools.TRXUploader \
  -server https://demo.conical.cloud \
  -product "myProduct" \
  -source "output.trx" \
  -token "noThisIsntOurToken" \
  -tag "local" \
  -tag "example" \
  -testRunType "Unit Test"

Viewing the results

When the results are uploaded to the Conical instance, each unit test run is mapped to one Conical test run, with the tests subsequently displayed grouped by name (‘.’ characters are treated as hierarchy separators).
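To make the grouping concrete, here’s a small sketch with hypothetical test names showing how the ‘.’ segments form the displayed hierarchy (the names are made up and this isn’t tied to Conical’s internals):

# Hypothetical test names; each '.' segment becomes a hierarchy level.
test_names = [
    "MyProduct.Tests.Api.Upload.CanUploadResults",
    "MyProduct.Tests.Api.Upload.RejectsInvalidToken",
    "MyProduct.Tests.Api.Security.DeniesAnonymousAccess",
]

for name in test_names:
    *groups, leaf = name.split(".")
    print(" / ".join(groups), "->", leaf)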

The details from the TRX file (e.g. the machine details, timings etc.) are uploaded as results XML, with any logging output being stored as logs.

Future steps

If you have any suggestions as to how to improve the tool or make it easier to handle your use case, then do get in touch, either via the contact details below or via GitHub.

Happy testing.

Evidence Sets Released

We’re pleased to announce that we’ve released a new version of Conical containing support for evidence sets. These allow users to have a high-level view of the state of their entire release candidate across multiple test run sets and products.

It’s taken a little more time than we had originally planned to “dot the ‘i’s and cross the ‘t’s”, but it’s definitely worth the wait. We updated the original implementation to remove the ability to mark tests as ‘pass after review’ (PAR), as the feedback we received was that having an immutable overview was rather useful in its own right. The PAR functionality will be coming soon as part of the general release approval functionality.

To make it easier to create evidence sets from the command line / CI pipeline, we’ve released a tool on NuGet – link – which makes it trivially easy to do so without needing to write any code.

We use this tool ourselves in our CI processes prior to release to create an evidence set representing all of the test material that is run against our final candidate Docker image, i.e. we can see the full results of all testing for that package in a single place and be confident that what we’re releasing works.

To get started, simply download Conical and follow the installation instructions. And as always, if you have any requests / comments, please do get in touch with us and we’ll do our very best to help.

Happy testing

Introducing Evidence Sets

[Updated to reflect change in feature scope following user feedback]

We’re pleased to announce a new feature that we’re working on – Evidence Sets. The premise here is that this allows a user to group a set of test run sets together to form a single viewable unit which can be used to provide evidence (hence the name) of testing.

Evidence sets can be used in multiple different ways, including:

  • Allowing for failing tests to be re-run if desired without having to re-run everything.
  • Grouping multiple pieces of testing together to have a single reference for test results and for end user sign-off.

Main features

The main features of evidence sets are:

  • Ability to collate multiple test run sets together, including:
    • optional prefixes to create custom hierarchies
    • subsets of test runs as desired
    • from multiple different products
  • Ability to have multiple test runs contribute to a single test (e.g. to handle re-runs). There are several options (best result, worst result, first result, last result or not allowed) for deciding the state of a test if multiple contributing test runs are specified; see the sketch after this list.
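As a rough illustration of how those options might behave, here’s a minimal sketch; it isn’t Conical’s implementation, just one plausible way of resolving a test’s overall state from several contributing runs (the status names and their ranking are assumptions).

# Conceptual sketch of merging several contributing results for one test.
# The status names and the ranking used for 'best'/'worst' are assumptions.
STATUS_RANK = {"failed": 0, "inconclusive": 1, "passed": 2}

def resolve(statuses, policy):
    """Return the overall status for a test given its contributing runs."""
    if policy == "not allowed":
        if len(statuses) > 1:
            raise ValueError("multiple contributing runs are not permitted")
        return statuses[0]
    if policy == "best":
        return max(statuses, key=STATUS_RANK.get)
    if policy == "worst":
        return min(statuses, key=STATUS_RANK.get)
    if policy == "first":
        return statuses[0]
    if policy == "last":
        return statuses[-1]
    raise ValueError("unknown policy: " + policy)

print(resolve(["failed", "passed"], "best"))   # passed - a re-run fixed the failure
print(resolve(["failed", "passed"], "worst"))  # failed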

Dogfooding

Internally, we use the evidence set functionality to allow us to have a coherent overview of the state of the application prior to release. For each release, we want to run:

  • integration tests for the API layer
    • For a fresh install
    • For each DB upgrade path
  • integration tests for the DB update functionality
  • integration tests for the fresh install functionality

Additionally, we would like to be able to show the results of the UI testing.

Example – API Integration Tests

The API integration tests are designed to check that a given instance of the API performs as expected, covering everything from uploading results to verifying that the security model behaves correctly. Given that we want to ensure that the functionality is correct regardless of whether it’s a fresh install or an upgraded one, we want to run the same set of integration tests against as many combinations as possible. As the running of these tests is highly automated (one just needs to specify the target server and the appropriate admin user), they are trivially easy to run and can generate a large number of result sets to analyse.

By using the evidence sets functionality, we can collate all of these result sets into a single display unit so that it’s very easy to get an overview of the state of the release candidate. We do this by using the ‘prefix’ functionality, so it’s very clear where any problem lies, e.g.:

  • api
    • clean
    • upgrades
      • v1
      • v2
      • v3
      • etc.

And then the usual test hierarchy applies underneath each node.

Note that as we wouldn’t release anything which is non-green, we don’t need to leverage the sign-off functionality in evidence sets.

In addition to the functionality above, we then add the installer / upgrade test results to the same evidence set (under appropriate prefixes) so we can demonstrate to the people signing off the release that everything is good.

Summary

We’re putting the final touches to the functionality and hope to have this work complete in the next week or so; we’ll then make it available to all of our clients in the usual fashion.

In the meantime, if you have any questions, queries or suggestions then please do get in touch with us.