In this video, Benoit Nachawati, German Viscuso, and Andrea Muttoni from the Amazon Alexa team talk about applying DevOps practices to Alexa skills, and explain why testing and automation matter.
It is very important to identify and fix bugs while developing Alexa skills, before end users report the defects. Here, the speakers cover unit-testing and end-to-end-testing best practices for Alexa skills.
In addition, the video shows how the deployment workflow can be automated, from committing your source code through skill deployment. The process of setting up proactive alarms is also covered in this session.
At 3:02, the speaker explains why Alexa skills must be tested. Testing your Alexa skill is essential to ensure a smooth and lovable customer experience. Common problems in Alexa skills include poor error handling, syntactical errors, confusing prompts or responses, dialog NLU (Natural Language Understanding) errors, external API connectivity issues, out-of-sync language models, and overly verbose prompts or responses. He adds that it is important to maintain a balance between innovation and stability.
At 7:32, the speakers outline important DevOps best practices: Infrastructure as Code (IaC), application and infrastructure version management, CI/CD (Continuous Integration/Continuous Deployment), and build, test, and release automation, along with proper monitoring and logging.
At 9:32, the speaker describes the different modes of testing. Manual testing ensures the basics of the Alexa skill are working, while unit testing ensures the code works as expected. End-to-end testing verifies that the entire skill functions correctly, while continuous testing ensures the live skill keeps working as expected 24/7.
At 11:00, the speaker mentions some basic tools for Alexa skill development. The Alexa developer console and the Alexa test simulator are tools that can be used by beginning Alexa skill developers. Experienced developers can use the Alexa Skills Kit Command Line Interface (ASK CLI) and the Alexa Skills Kit Skill Management API (ASK SMAPI).
At 15:21, the speaker talks about evaluating and testing the basic interaction model. This involves testing utterance resolution and offering multi-turn support. The first stage of testing uses the ‘Utterance Conflict Detection’ tool, which detects utterances that are mapped to multiple intents; such conflicts can reduce the accuracy of the NLU model.
This tool runs automatically on each model before the first version of the skill is published. The ‘Utterance Profiler’ tool can test utterances as you build your interaction model: you provide utterances and check how they resolve to intents and slots. If the right intent is not matched, you can update your sample utterances before you start coding! The purpose of the ‘NLU Evaluation’ tool is to help compare how Alexa will interpret inputs. This tool supports regression testing, allowing developers to re-run evaluations after adding new features.
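The idea behind such regression evaluations can be sketched with a toy harness in Python. Note that `resolve_intent`, the annotation set, and the intent names below are stand-ins invented for illustration, not the real NLU Evaluation tool or its API:

```python
# Toy NLU regression harness: compare expected intents against what a
# (stubbed) resolver returns, mimicking what the NLU Evaluation tool does.

def resolve_intent(utterance: str) -> str:
    """Stand-in for the real NLU model: a naive keyword-based resolver."""
    utterance = utterance.lower()
    if "weather" in utterance:
        return "GetWeatherIntent"
    if "stop" in utterance or "cancel" in utterance:
        return "AMAZON.StopIntent"
    return "AMAZON.FallbackIntent"

# Annotated evaluation set: (utterance, expected intent) pairs.
ANNOTATION_SET = [
    ("what's the weather like", "GetWeatherIntent"),
    ("stop", "AMAZON.StopIntent"),
    ("tell me a joke", "TellJokeIntent"),
]

def run_evaluation(annotations):
    """Return (passed, failed) lists so regressions are easy to spot."""
    passed, failed = [], []
    for utterance, expected in annotations:
        actual = resolve_intent(utterance)
        record = (utterance, expected, actual)
        (passed if actual == expected else failed).append(record)
    return passed, failed

passed, failed = run_evaluation(ANNOTATION_SET)
print(f"passed={len(passed)} failed={len(failed)}")
```

Re-running a harness like this after every model change is what turns a one-off evaluation into a regression test.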
At 31:45, the speaker discusses how to test your skill’s AWS Lambda code. The first step is to unit test Alexa requests against your Lambda code, using ‘ask api invoke-skill -s <skill-id> -f <request-file> -e <endpoint-region>’.
The next step is to simulate the NLU and the Lambda code in one turn, by invoking ‘ask simulate --text "hello there" -l <locale>’. The final step is to simulate a multi-turn dialog (a full session), using ‘ask dialog -l <locale>’.
At 36:41, the speaker explains how the code behind an Alexa skill can be unit tested. Unit testing plays a huge role in identifying bugs during the development phase. This mode of testing exercises the code without dependencies or deployment. In addition, it enables code-coverage calculation and continuous integration.
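As a minimal sketch of what "testing without dependencies or deployment" means, the handler below is a hand-rolled example invented for illustration (real skills would typically use the ASK SDK): the test builds a fake request locally and asserts on the response, with no Alexa service or Lambda deployment involved:

```python
def handle_request(event: dict) -> dict:
    """Minimal Alexa-style handler: greet on LaunchRequest, reprompt otherwise."""
    request_type = event.get("request", {}).get("type")
    if request_type == "LaunchRequest":
        speech = "Welcome to the demo skill!"
    else:
        speech = "Sorry, I didn't get that."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": False,
        },
    }

# Unit test: construct the request locally -- no deployment, no network.
launch_event = {"request": {"type": "LaunchRequest"}}
response = handle_request(launch_event)
assert response["response"]["outputSpeech"]["text"] == "Welcome to the demo skill!"
assert response["response"]["shouldEndSession"] is False
print("unit test passed")
```

Because the handler is a plain function taking a dictionary, a coverage tool and a CI pipeline can run such tests on every commit.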
At 41:17, the speaker talks about end-to-end testing, which exercises the entire system from the front end to external services. This type of testing ensures that the interaction model is properly constructed, and it also covers speech recognition and AI testing.
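The end-to-end idea of an utterance going in and a spoken response coming out, crossing every layer, can be illustrated with a toy pipeline. All three stages below are stubs invented for illustration; in a real end-to-end test, speech recognition and NLU are performed by the Alexa service itself:

```python
def recognize_speech(audio_text: str) -> str:
    """Stub ASR layer: a real e2e test sends audio or text to Alexa."""
    return audio_text.strip().lower()

def resolve_intent(text: str) -> str:
    """Stub NLU layer mapping recognized text to an intent."""
    return "HelloIntent" if "hello" in text else "AMAZON.FallbackIntent"

def handle_intent(intent: str) -> str:
    """Stub skill back end returning the spoken response."""
    responses = {
        "HelloIntent": "Hello there, nice to meet you!",
        "AMAZON.FallbackIntent": "Sorry, I didn't get that.",
    }
    return responses[intent]

def end_to_end(audio_text: str) -> str:
    # Drive the full chain, front end to back end, as an e2e test does.
    return handle_intent(resolve_intent(recognize_speech(audio_text)))

assert end_to_end("Hello there") == "Hello there, nice to meet you!"
```

An end-to-end failure in any one layer (recognition, resolution, or the handler) surfaces as a wrong final response, which is exactly why this mode of testing complements unit tests.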
At 50:27, the speaker covers Infrastructure as Code, which involves provisioning all skill artifacts in one CloudFormation stack. Infrastructure as Code allows orderly and predictable provisioning, decommissioning, and updating of resources. The speaker then moves on to continuous testing.
Continuous testing involves monitoring Alexa skills on a regular basis once they are live, and getting alerted whenever an issue arises. In this video, the speakers covered the best practices to follow when testing skills, and showed how skill testing can be validated and automated.