I recently got the following question about our Crawler feature:
I was wondering where it [the crawler] fits in with a testing team and how much it replaces human involvement. Are the reports it generates readable to the layperson, and if not, can they be output in an Excel-friendly format or something that could be used as a deliverable?
Where does the crawler fit in and how much does it replace human involvement?
In short - the crawler generates a test case for every possible path in your conversation model. No human interaction is needed.
The longer version - the crawler solves two main challenges.
First, composing test cases and test sets, for example for regression testing. Depending on the size of your conversation model, this takes an enormous amount of time if done manually. The crawler automatically creates a test case for every possible path in your model. As long as your bot uses state-of-the-art quick replies or buttons as output methods, no human interaction is needed; only open-ended questions require manual input.
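To make the idea concrete, here is a minimal sketch of how enumerating every path might work. It is not the actual Crawler implementation - it just assumes, for illustration, that the conversation model is a mapping from states to their quick replies:

```python
# Hypothetical sketch: the conversation model as a dict mapping each state
# to its quick replies (button labels) and the follow-up state each leads to.
# All names and the toy model below are illustrative, not the real Crawler.

def enumerate_paths(model, state="start", path=None):
    """Yield one test case (a list of quick-reply clicks) per leaf path."""
    path = path or []
    replies = model.get(state, {})
    if not replies:              # leaf state: the conversation ends here
        yield path
        return
    for reply, next_state in replies.items():
        yield from enumerate_paths(model, next_state, path + [reply])

# Toy model: a greeting with two quick replies, one branching further.
model = {
    "start": {"Order pizza": "pizza", "Track order": "track"},
    "pizza": {"Small": "done", "Large": "done"},
    "track": {},
    "done":  {},
}

test_cases = list(enumerate_paths(model))
print(test_cases)
# → [['Order pizza', 'Small'], ['Order pizza', 'Large'], ['Track order']]
```

Each entry is one scripted path a test runner could replay against the bot, which is why no human input is needed as long as every turn offers buttons or quick replies.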
And second, maintaining your tests. Every change in your conversation model needs to be reflected in your tests, and these efforts, too, can become enormous if done manually. The crawler identifies the changes, and you can easily add the delta to your tests.
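Conceptually, the delta between two crawls can be computed as a set difference over the discovered paths. A minimal sketch with made-up path data (again illustrative, not the Crawler's internals):

```python
# Hypothetical sketch: each crawl yields the set of conversation paths
# (tuples of quick replies); the delta is the paths to add and to retire.

old_paths = {("Order pizza", "Small"), ("Order pizza", "Large"),
             ("Track order",)}
new_paths = {("Order pizza", "Small"), ("Order pizza", "Medium"),
             ("Order pizza", "Large"), ("Cancel order",)}

added   = new_paths - old_paths   # new test cases to add to the test set
removed = old_paths - new_paths   # test cases that no longer match the model

print(sorted(added))    # → [('Cancel order',), ('Order pizza', 'Medium')]
print(sorted(removed))  # → [('Track order',)]
```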
Are the reports readable, can they be output in various formats that could be used as a deliverable?
The crawler creates a flow chart of the detected conversation model along with a test case for every path in it, which can be exported into a test set.
If you execute the test set you get a visual representation of the conversation flows tested as well as various result graphs. In addition, you can automatically generate a PDF test report of the results, export failed test cases directly to Jira, for example, or download all results in various formats (xlsx, csv, json, …).
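As a rough illustration of why such exports are straightforward to consume downstream (the result fields here are made up, not the real report schema), writing results to Excel-friendly CSV and to JSON takes only Python's standard library:

```python
# Hypothetical sketch: exporting test results as deliverables.
# Field names are illustrative; real reports carry more detail.
import csv
import json

results = [
    {"test_case": "Order pizza > Small", "status": "PASSED", "duration_ms": 412},
    {"test_case": "Track order",         "status": "FAILED", "duration_ms": 198},
]

# JSON for machine consumption
with open("results.json", "w") as f:
    json.dump(results, f, indent=2)

# CSV opens directly in Excel, so it works as a layperson-friendly deliverable
with open("results.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["test_case", "status", "duration_ms"])
    writer.writeheader()
    writer.writerows(results)
```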