How To Set Up Testing and Automation for Alexa Skills

Overview

There are several methods commonly used to test and simulate Alexa skills during the development process. See the Alexa Cookbook testing guide for more details.

For running formal QA tests, developers can leverage third-party tools that run on standard unit test frameworks like Jest or Mocha.

Here we will focus on running tests with the Bespoken CLI (bst).

There are four key aspects to the testing provided:

  • Unit Tests - For ensuring code is working correctly
  • End-to-end Tests - For ensuring the complete skill behavior, including the interaction model, is working correctly
  • Continuous Integration - Via Travis CI - automatically runs tests whenever changes are made to the code
  • Code Coverage - Automatically generated by the unit tests and Travis - shows how well unit-tested the code is, as well as progress over time

Setup

To get started with testing, install the Bespoken CLI:

npm install bespoken-tools -g

Running Unit Tests

Unit tests can be found under the /lambda/custom/test/unit folder.
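
Each unit test is a simple YAML script that sends intents and requests directly to the skill's handler and asserts on the responses. The sketch below is illustrative - the wildcard assertions are placeholders, not the actual contents of this project's test files:

- test: Get help for various intents
- AMAZON.HelpIntent: "*"
- SessionEndedRequest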

Key configuration settings for the tests are found in the testing.json file. For full details on how testing.json works, read here.
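
As a point of reference, a minimal testing.json for unit tests might look something like this - the handler path and locale are illustrative assumptions, so check the actual file for this project's settings:

{
  "handler": "index.handler",
  "locales": "en-US"
}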

To run the tests, open a command-line terminal, go to the /lambda/custom directory, and type:

bst test test/unit

The last argument tells our test runner to only run the tests under the test/unit directory.

You should see output like this:

 PASS  test\unit\CommonHandlers.test.yml
  en-US
    Get help for various intents
      √ AMAZON.HelpIntent
      √ SessionEndedRequest
    Get region list
      √ RegionListIntent

------------------------|----------|----------|----------|----------|-------------------|
File                    |  % Stmts | % Branch |  % Funcs |  % Lines | Uncovered Line #s |
------------------------|----------|----------|----------|----------|-------------------|
All files               |    65.15 |    58.57 |    75.68 |    65.15 |                   |
 basicSearchHandlers.js |    70.44 |    62.39 |    85.71 |    70.44 |... 97,498,514,531 |
 resultsHandlers.js     |    66.82 |    58.39 |    28.57 |    66.82 |... 98,599,601,604 |
------------------------|----------|----------|----------|----------|-------------------|
Test Suites: 4 passed, 4 total
Tests:       13 passed, 13 total
Snapshots:   0 total
Time:        19.668s, estimated 22s

The results show:

  • What happened with each test
  • A summary of the results as a whole
  • Abbreviated code coverage information

To take a look at the code coverage for the tests in more depth, open the file <PROJECT_ROOT>/test_output/coverage/lcov-report/index.html in a browser. The code coverage shows you which code is being executed by the tests, and which parts are not. For more information on how to understand this output, read here.

You can also see a pretty-printed HTML report of the test results here: <PROJECT_ROOT>/test_output/report/index.html.

Running End-to-end Tests

End-to-end tests will interact with your actual Alexa skill using our simple testing scripts. Here is an example test:

- test: Invoke skill and search by major
- open college finder: welcome to college finder
- search by major: what major would you like to search for
- biology: i found * schools for biological and biomedical sciences you can refine your search or hear the first 12 results which would you like
- i want to hear my results: here are your search results
- stop

For this test, the utterances on the left-hand side will be turned into speech and sent to Alexa and your skill. The skill will respond, and the response will in turn be converted back into text and compared to the expected responses on the right-hand side. It's easy to create full-cycle tests using this approach, which exercise all aspects of the system, including the interaction model (i.e., ensuring that utterances match up correctly to the intent), Display output, AudioPlayer behavior, etc.

To use end-to-end testing, it is necessary to do the following:

  • Deploy your skill (code and interaction model) via the ASK CLI
  • Enable it for testing via the Alexa app
  • Set up a Virtual Device Token with Bespoken

The virtual device token should then be included in the testing.json file. It is what allows us to interact with Alexa programmatically.
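
As a sketch, an end-to-end testing.json might look like the following - the token value is a placeholder you would replace with your own:

{
  "type": "e2e",
  "locales": "en-US",
  "virtualDeviceToken": "<YOUR_VIRTUAL_DEVICE_TOKEN>"
}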

Setting Up Continuous Integration (CI)

We configured the project with Travis CI so that unit tests are run every time code is pushed to GitHub. There are any number of great Continuous Integration tools out there - we use Travis here as just one example to get developers started with CI.

The Travis configuration is stored in .travis.yml. For running unit tests, we install the Bespoken CLI and the CodeCov CLI. Then we run the unit tests and send the coverage data to CodeCov.
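
For illustration, a simplified .travis.yml along these lines would do the job - the Node.js version and directories are assumptions here, so refer to the actual file in this repo for the real configuration:

language: node_js
node_js:
  - "8"
install:
  - npm install -g bespoken-tools codecov
  - (cd lambda/custom && npm install)
script:
  - (cd lambda/custom && bst test test/unit)
after_success:
  - codecov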

To set up Travis for your own project, go to travis-ci.org - if you have not signed up before, choose "Sign In With GitHub" and you will see a list of your GitHub repos. Select your repo to enable it, and you should be all set!

Using CodeCov

CodeCov.io is a hosted version of the code coverage reporting we described earlier.

Using hosted code coverage provides a few benefits:

  • Easily accessible code coverage information
  • Track code coverage over time - see how your quality is improving (or degrading :-))
  • Ability to mandate code coverage levels via GitHub status checks (see the sketch after this list)
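
For example, the status-check thresholds can be set with a codecov.yml file at the root of the repo. This is a hedged sketch - the 65% target is an arbitrary illustration, not a value this project mandates:

coverage:
  status:
    project:
      default:
        target: 65%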

For CodeCov to work with a GitHub repo like this one, just go to their site and select "Signup", then select "Signup With GitHub".

Once signed up, you will see a list of repos - just select your repo and enable it for coverage reporting.

Wrapup

There are many aspects to testing and automation, and we have run through a few here. To learn more about them and how Bespoken approaches testing, take a look at https://bespoken.io/testing.

And if you need assistance, reach out to Bespoken on any of these channels: