
microsoft cognitive - Display Text for QnAMaker follow-on prompts

I'm attempting to use follow-on prompts within QnAMaker but am confused about the purpose of the field labelled "Display text" in the "Follow-up prompt" creation dialogue. https://docs.microsoft.com/en-us/azure/cognitive-services/qnamaker/how-to/multiturn-conversation describes this field as "The custom text to display in the follow-up prompt." To me, that suggests it's just a label for the follow-up prompt, which is typically rendered as a button. I therefore assumed the text had no purpose other than as a label, and that the button would be linked directly to the chosen question / answer pair.

However, from experimenting with a QnAMaker knowledge base, it seems that the "Display text" is actually passed to the QnAMaker service and used as the query to search for the answer. In other words, the "Display text" value has to serve two purposes: labelling the button and successfully finding the follow-on answer.

This means I can't use short follow-on prompts such as "How do I pay for it?" or "How do I join it?" where the main Q/A pair relates to one of several services, because these strings won't reliably return the intended answer. Instead, the prompts have to be the more verbose "How do I pay for service A" and "How do I join service A".
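If I've understood the behaviour correctly, clicking a prompt button ends up as a generateAnswer call roughly like the sketch below, with the "Display text" sent as the search query. This is only my guess at what's happening; the host, knowledge base ID and endpoint key are placeholders.

// POST https://<APPNAME>.azurewebsites.net/qnamaker/knowledgebases/<KB_ID>/generateAnswer
// Authorization: EndpointKey <ENDPOINT_KEY>
var requestBody = {
    question: 'How do I pay for it?',   // the prompt's "Display text", used as the query
    top: 3
};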

Have I understood this correctly? I don't think the documentation makes this at all clear...


1 Answer


Multi-turn QnA Maker conversations are still in preview, and there is currently no SDK to help you build a bot that knows how to interact with the follow-up prompt API. You are ultimately in control, so your bot can treat the display text however you want. The "display text" is just a value that you've inserted into an answer in your knowledge base so that it gets returned along with the answer after a call to generateAnswer.
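To make that concrete, a generateAnswer response whose matched QnA pair has follow-up prompts looks roughly like this (trimmed to the relevant fields, so treat it as a sketch rather than the exact payload):

// Trimmed shape of a generateAnswer response containing follow-up prompts
var exampleResponse = {
    answers: [
        {
            id: 1,
            answer: 'Here is some information about service A...',
            score: 100,
            context: {
                prompts: [
                    { displayOrder: 0, qnaId: 2, displayText: 'How do I pay for it?' },
                    { displayOrder: 1, qnaId: 3, displayText: 'How do I join it?' }
                ]
            }
        }
    ]
};

Your bot decides what to do with those displayText values; nothing in the service forces them to be used as the next query.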

It can be very helpful to have your display text match the text of the question you're linking to, because the clicked text can then be used to retrieve the correct follow-up QnA pair, so long as the context is included in the API call. That's what happens in this sample. It sounds like you want to get it to work without having the prompt's display text match the text of the follow-up question. That can get tricky, but here's something you can do.
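If you do take the matching-text approach, the call would look something like the sketch below (inside an async turn handler): send the clicked text as the question and pass the previous QnA pair's ID as context. The qna instance is the same kind of QnAMaker object constructed in the code further down, and previousQnaId / previousUserQuestion are stand-ins for values you'd keep in conversation state.

// Sketch: resolve a short prompt like "How do I pay for it?" by supplying the previous QnA context
var results = await qna.getAnswers(turnContext, {
    context: {
        previousQnAId: previousQnaId,            // ID of the QnA pair whose answer the user was just shown
        previousUserQuery: previousUserQuestion  // the user's previous query, also kept in state
    }
});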

Remember that you specify more than just display text when you make follow-up prompts. You also link to a specific QnA pair. This allows the API to return that QnA ID to you along with the display text. You haven't mentioned which channel your bot is targeting, but if you're using a channel that supports postBack or messageBack actions then you can pass the QnA ID to your bot invisibly and then your bot can use that to access the answer. If you go this route, you may not even need to worry about dialogs or state. You also haven't mentioned what language you're coding your bot in, but here's an example of how this might be implemented in Node.js:

// QnAMaker comes from the botbuilder-ai package; CardFactory and MessageFactory come from botbuilder:
// const { QnAMaker } = require('botbuilder-ai');
// const { CardFactory, MessageFactory } = require('botbuilder');
async testQnAMaker(turnContext) {
    var qna = new QnAMaker({
        knowledgeBaseId: '<GUID>',
        endpointKey: '<GUID>',
        host: 'https://<APPNAME>.azurewebsites.net/qnamaker'
    });

    // If the user clicked a messageBack button, activity.value will contain the qnaId attached to it
    var value = turnContext.activity.value;
    var qnaId = value && value.qnaId;
    // qnaId will be undefined if value is empty, in which case getAnswers falls back to a normal
    // search against the text of the incoming message
    var results = await qna.getAnswers(turnContext, { qnaId });
    var firstResult = results[0];

    if (firstResult) {
        var answer = firstResult.answer;
        var resultContext = firstResult.context;
        var prompts = resultContext && resultContext.prompts;

        // Render the prompts as messageBack buttons so each one carries its qnaId back invisibly in activity.value
        if (prompts && prompts.length) {
            var card = CardFactory.heroCard(
                answer,
                [],
                prompts.map(prompt => ({
                    type: 'messageBack',
                    title: prompt.displayText,
                    displayText: prompt.displayText,
                    text: prompt.displayText,
                    value: { qnaId: prompt.qnaId }
                }))
            );

            answer = MessageFactory.attachment(card);
        }

        await turnContext.sendActivity(answer);
    } else {
        await turnContext.sendActivity("I can't answer that");
    }
}

Note that this does have some limitations. Because it works by retrieving the QnA ID from the activity's value property, it may not be able to find the correct QnA pair if the user types the text of the button manually instead of clicking the button.

If you want to make the display text work on its own without relying on the QnA ID, you could save your own mappings so that your bot knows which display text values correspond to which QnA ID in each context (sketched below). Alternatively, you could just add the display text as an alternative phrasing of the question in the QnA pair, so "How do I pay for service A" and "How do I pay for service B" would both have "How do I pay for it" as a form of the question. Because you'd then have duplicate phrasings in multiple QnA pairs, you'll need to pass the context in your calls to generateAnswer for this to work.
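For the mapping route, something as simple as a plain object keyed by the previous QnA ID would do. Everything below is hypothetical wiring rather than part of the SDK; the IDs and text are made up.

// Hypothetical lookup: previous QnA ID -> { display text -> follow-up QnA ID }
var promptMap = {
    1: { 'How do I pay for it?': 2, 'How do I join it?': 3 },
    4: { 'How do I pay for it?': 5, 'How do I join it?': 6 }
};

function resolveQnaId(previousQnaId, userText) {
    var mapping = promptMap[previousQnaId];
    return mapping && mapping[userText];
}

You could then pass the resolved ID to getAnswers as { qnaId }, just as in the code above, and fall back to an ordinary text search when the lookup misses.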

See this answer for more info about multi-turn conversations.

