Integrate Alexa With Voiceflow

How to integrate Alexa and Voiceflow so you can focus on conversation design instead of coding, and build better conversations!

By Xavier Portilla Edo · Nov. 22, 2023 · Tutorial


Alexa has a lot of capabilities, but it is not easy to create a complex conversation. Voiceflow is a tool that allows you to create complex conversations with Alexa without writing code. This integration allows you to create a conversation in Voiceflow and then deploy it to Alexa.

Because of that, in this tutorial you will find a simple example of how to integrate Alexa with Voiceflow using the Alexa Skills Kit SDK for Node.js and calling Voiceflow’s Dialog Manager API.

Prerequisites

  1. You need to have an account on Voiceflow
  2. You need to have an account on Alexa Developer
  3. Node.js and npm/yarn installed on your computer

Voiceflow Project

On Voiceflow, you will need to create a project and design a conversation. You can follow the Voiceflow Quick Start to create a simple one. In Voiceflow, the only thing you have to care about is designing the conversation.

In this example, we are going to create a simple conversation that asks the user for information about Pokemon. The conversation will be like this:

[Image: the conversation flow designed in the Voiceflow project]



NLU

Voiceflow has a built-in NLU, but since we are going to call Voiceflow through the Dialog Manager API, we will need to design our NLU on both Voiceflow and Alexa.

Following the example, we are going to create an intent called info_intent and a slot called pokemon that will be filled with the name of the Pokemon that the user wants to know about:

[Image: the info_intent intent and its pokemon slot in the Voiceflow NLU]


Dialog Manager API

The Dialog Manager API is a REST API that allows you to interact with Voiceflow. You can find the documentation here.

The DM API automatically creates and manages the conversation state. Identical requests to the DM API may produce different responses depending on your diagram’s logic and the previous request that the API received.

The DM API endpoint is: https://general-runtime.voiceflow.com/state/user/{userID}/interact

Different types of requests can be sent. To see a list of all request types, check out the documentation for the action field below.

To start a conversation, you should send a launch request. Then, to pass in your user’s response, you should send a text request. If you have your own NLU matching, then you may want to directly send an intent request.

Here you have an example of a request:

Shell
 
curl --request POST \
     --url 'https://general-runtime.voiceflow.com/state/user/{userID}/interact?logs=off' \
     --header 'accept: application/json' \
     --header 'content-type: application/json' \
     --header 'Authorization: VF.DM.96ds3423ds9423fs87492fds79792gf343' \
     --data '
{
  "action": {
    "type": "launch"
  },
  "config": {
    "tts": false,
    "stripSSML": true,
    "stopAll": true,
    "excludeTypes": [
      "block",
      "debug",
      "flow"
    ]
  }
}
'


As you can see, you need to pass the userID and the Authorization header. The userID identifies the user (and therefore the conversation state) that you want to interact with. The Authorization header is the API key, which you can find in the Voiceflow project settings.
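
To pass the user’s reply on later turns, you would send a text request to the same endpoint. The following body is a sketch based on the launch example above; the payload is just an illustrative utterance:

JSON
 
{
  "action": {
    "type": "text",
    "payload": "info about pikachu"
  }
}
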

You can find the Voiceflow project that I used for this example in voiceflow/project.vf.

Alexa Skill

To create an Alexa Skill, you need to go to the Alexa Developer Console and create a new skill. Follow the Alexa Developer Console Quick Start to create a simple skill.

NLU

We will need to replicate the Voiceflow NLU (intents and entities) in our Alexa Skill.

For the pokemon slot, we are using the AMAZON.SearchQuery type. This type captures the raw user input so that it can be sent directly to Voiceflow. You can find more information about this type here.
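
As an illustration, the relevant part of the Alexa interaction model JSON might look like the sketch below. The intent and slot names mirror the Voiceflow NLU; the invocation name and sample utterance are made up for this example (note that an AMAZON.SearchQuery slot must appear inside a carrier phrase in the samples):

JSON
 
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "pokemon info",
      "intents": [
        {
          "name": "info_intent",
          "slots": [
            {
              "name": "pokemon",
              "type": "AMAZON.SearchQuery"
            }
          ],
          "samples": [
            "info about {pokemon}"
          ]
        }
      ]
    }
  }
}
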

Lambda Code

The Alexa Skill code is going to be generic, which means it can be used with any Voiceflow project. To achieve that, we are going to implement a Lambda function that calls the Voiceflow Dialog Manager API, using the Alexa Skills Kit SDK for Node.js and Axios.
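
If you are building the Lambda from scratch rather than starting from the repository, these are the packages the code below relies on (assuming the standard ASK SDK v2 setup, where the Alexa object comes from ask-sdk-core):

Shell
 
npm install ask-sdk-core axios
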

We will need to touch only two handlers: the LaunchRequestHandler and the ListenerIntentHandler. The LaunchRequestHandler will be used to start the conversation, and the ListenerIntentHandler will be used to send the user input to Voiceflow.

Let’s start with the LaunchRequestHandler:

JavaScript
 
const LaunchRequestHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
    },
    async handle(handlerInput) {
        // Use the first 8 characters of the Alexa user ID as the Voiceflow userID
        let chatID = Alexa.getUserId(handlerInput.requestEnvelope).substring(0, 8);
        // Start the conversation with a launch request
        const messages = await utils.interact(chatID, {type: "launch"});

        return handlerInput.responseBuilder
            .speak(messages.join(" "))
            .reprompt(messages.join(" "))
            .getResponse();
    }
};


This handler is called when the skill is launched. We derive a short user ID from the Alexa user ID, call the Voiceflow Dialog Manager API with the launch action, and return the response.

The following interactions are going to be handled by the ListenerIntentHandler:

JavaScript
 
const ListenerIntentHandler = {
    canHandle(handlerInput) {
        return Alexa.getRequestType(handlerInput.requestEnvelope) === 'IntentRequest';
    },
    async handle(handlerInput) {
        let chatID = Alexa.getUserId(handlerInput.requestEnvelope).substring(0, 8);
        // Alexa has already done the NLU inference, so forward its results to Voiceflow
        const intent = Alexa.getIntentName(handlerInput.requestEnvelope);
        const entitiesDetected = utils.alexaDetectedEntities(handlerInput.requestEnvelope);

        const request = { 
            type: "intent", 
            payload: { 
                intent: {
                    name: intent
                },
                entities: entitiesDetected
            }
        };

        const messages = await utils.interact(chatID, request);

        return handlerInput.responseBuilder
            .speak(messages.join(" "))
            .reprompt(messages.join(" "))
            .getResponse();
    }
};


This handler is called when the user says something. Since the NLU inference is done by Alexa, we get the detected intent and entities and send them to Voiceflow with the intent action. Then we return the response.

To get the detected entities, we are going to use the following function:

JavaScript
 
module.exports.alexaDetectedEntities = function alexaDetectedEntities(alexaRequest) {
    let entities = [];
    // Guard against intents that have no slots at all
    const entitiesDetected = alexaRequest.request.intent.slots || {};
    for (const entity of Object.values(entitiesDetected)) {
        entities.push({
            name: entity.name,
            value: entity.value
        });
    }
    return entities;
}


You can find the code of this function in lambda/utils.js.
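
For example, calling it with a stripped-down IntentRequest envelope (illustrative values, not the full Alexa payload) produces the entities array that Voiceflow expects:

JavaScript
 
const utils = require('./utils');

// A simplified IntentRequest with one filled slot (illustrative values)
const alexaRequest = {
    request: {
        intent: {
            name: 'info_intent',
            slots: {
                pokemon: { name: 'pokemon', value: 'pikachu' }
            }
        }
    }
};

console.log(utils.alexaDetectedEntities(alexaRequest));
// [ { name: 'pokemon', value: 'pikachu' } ]
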

Finally, we have to make sure that we add the handlers to the skill:

JavaScript
 
exports.handler = Alexa.SkillBuilders.custom()
    .addRequestHandlers(
        LaunchRequestHandler,
        ListenerIntentHandler,
        HelpIntentHandler,
        CancelAndStopIntentHandler,
        FallbackIntentHandler,
        SessionEndedRequestHandler,
        IntentReflectorHandler)
    .addErrorHandlers(
        ErrorHandler)
    .withCustomUserAgent('sample/hello-world/v1.2')
    .lambda(); // expose the skill as an AWS Lambda handler


In the handlers above, you can see that we are using a function called utils.interact. This function calls the Voiceflow Dialog Manager API. You can find its code in lambda/utils.js:

JavaScript
 
const axios = require('axios');

// Voiceflow Dialog Manager API key (find it in your Voiceflow project settings)
const VF_API_KEY = "VF.DM.96ds3423ds9423fs87492fds79792gf343";

module.exports.interact = async function interact(chatID, request) {
    let messages = [];
    console.log(`request: ` + JSON.stringify(request));

    const response = await axios({
        method: "POST",
        url: `https://general-runtime.voiceflow.com/state/user/${chatID}/interact`,
        headers: {
            Authorization: VF_API_KEY
        },
        data: {
            request
        }
    });

    // The DM API returns an array of traces; collect the text/speak ones
    for (const trace of response.data) {
        switch (trace.type) {
            case "text":
            case "speak":
                {
                    // sanitize the message (strip tags, line breaks, and special characters)
                    messages.push(this.filter(trace.payload.message));
                    break;
                }
            case "end":
                {
                    messages.push("Bye!");
                    break;
                }
        }
    }

    console.log(`response: ` + messages.join(","));
    return messages;
};


This function returns an array of messages that we use to build the response. We have also added some code to remove line breaks and special characters:

JavaScript
 
module.exports.filter = function filter(string) {
    string = string.replace(/&#39;/g, '\''); // decode HTML-encoded apostrophes
    string = string.replace(/(<([^>]+)>)/ig, ''); // strip SSML/HTML tags
    string = string.replace(/\&/g, ' and ');
    string = string.replace(/[&\\#,+()$~%*?<>{}]/g, ''); // drop special characters
    string = string.replace(/\s+/g, ' ').trim(); // collapse whitespace and line breaks
    string = string.replace(/ +(?= )/g, '');

    return string;
};


With this code, we have finished the Alexa Skill. You can find the code of the Lambda function in lambda/index.js.

Testing

Once you have created the Alexa Skill and the Voiceflow project, you can test the integration using the Alexa Simulator in the Developer Console or a real device.

Following the example we were using, you can test the Alexa Skill by asking for information about a Pokemon.
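
If you deployed the skill with the ASK CLI (v2), you can also test it from the terminal with an interactive dialog session. This is a sketch that assumes you run the command from inside the skill project folder:

Shell
 
# Opens an interactive dialog against the skill's development stage
ask dialog --locale en-US
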

Conclusion

As you can see, it is straightforward to integrate Alexa with Voiceflow. You can create complex conversations in Voiceflow and then deploy them to Alexa, so your focus stays on the conversation rather than the code!

I hope you have enjoyed this tutorial.

You can find the code of this tutorial here.

Happy coding!


Published at DZone with permission of Xavier Portilla Edo, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.

