Description

Provides the configuration for audio input processing. The output can either include NLP processing (via the nlpModels) or just the transcription itself. speechContexts can be used to further improve transcription accuracy for an assumed context.

Example

var options = VoiceML.ListeningOptions.create();

// Keyword model: named groups of keywords to detect in the transcription.
var nlpKeywordModel = VoiceML.NlpKeywordModelOptions.create();
nlpKeywordModel.addKeywordGroup("fruit", ["orange", "apple"]);
nlpKeywordModel.addKeywordGroup("vegetable", ["carrot", "tomato"]);

// Intent model for the VOICE_ENABLED_UI model; possibleIntents left empty here.
var nlpIntentsModel = VoiceML.NlpIntentsModelOptions.create("VOICE_ENABLED_UI");
nlpIntentsModel.possibleIntents = [];

// A second intent model limited to the "next" and "back" intents.
var nlpIntentsModel2 = VoiceML.NlpIntentsModelOptions.create("VOICE_ENABLED_UI");
nlpIntentsModel2.possibleIntents = ["next", "back"];

options.nlpModels = [nlpIntentsModel, nlpKeywordModel, nlpIntentsModel2];

// Boost the likelihood of expected phrases (strength 2 on a 1-10 scale).
options.addSpeechContext(["orange", "apple"], 2);
options.speechRecognizer = VoiceMLModule.SpeechRecognizer.Default;
interface ListeningOptions {
    languageCode: string;
    nlpModels: BaseNlpModel[];
    postProcessingActions: PostProcessingAction[];
    shouldReturnAsrTranscription: boolean;
    shouldReturnInterimAsrTranscription: boolean;
    speechContexts: SpeechContext[];
    speechRecognizer: string;
    addSpeechContext(phrases, boost): void;
    getTypeName(): string;
    isOfType(type): boolean;
    isSame(other): boolean;
}


Properties

languageCode: string

Description

The language which VoiceML should listen to.

nlpModels: BaseNlpModel[]

Description

Options for the ML model to be used.

postProcessingActions: PostProcessingAction[]

Description

An array of VoiceML.QnaAction elements. It is used to pass the context in each QnaAction to the DialogML.

shouldReturnAsrTranscription: boolean

Description

Whether the complete transcription should be returned. Complete transcriptions are returned after the user has stopped speaking, and are marked with isFinalTranscription=true in the OnListeningUpdate event.

shouldReturnInterimAsrTranscription: boolean

Description

Whether interim transcriptions should be returned. Interim transcriptions are returned while the user is still speaking; they may be less accurate and can change in subsequent transcriptions. These interim results are marked with isFinalTranscription=false in the OnListeningUpdate event.
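A sketch of a listening-update handler that branches on this flag; the eventArgs field names (transcription, isFinalTranscription) are assumed from the descriptions above:

```javascript
// Distinguish interim from final transcriptions in an OnListeningUpdate
// handler. Interim results may still change; final results are stable.
function onListeningUpdate(eventArgs) {
    if (eventArgs.isFinalTranscription) {
        return "final: " + eventArgs.transcription;
    }
    return "interim: " + eventArgs.transcription;
}
```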

speechContexts: SpeechContext[]

Description

Supports multiple speech contexts for increased transcription accuracy.

speechRecognizer: string

Description

An optional attribute to specify which speech recognizer ML model to use when transcribing. When a new ListeningOptions is created, this attribute defaults to SPEECH_RECOGNIZER, which is currently the only supported value.

Methods

addSpeechContext(phrases, boost): void

  • Parameters

    • phrases: string[]
    • boost: number

    Returns void

    Description

In cases where specific words are expected from the user, the transcription accuracy of these words can be improved by strengthening their likelihood in the given context. The strength is scaled from 1 to 10 (10 being the strongest increase); the default value is 5.
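Since behavior for out-of-range strengths is not described above, a hypothetical helper (clampBoost is not part of the VoiceML API) could normalize a value to the documented 1-10 scale before passing it to addSpeechContext:

```javascript
// Hypothetical helper: keep a speech-context boost inside the documented
// 1-10 range, falling back to the documented default of 5 for non-numbers.
function clampBoost(boost) {
    if (typeof boost !== "number" || isNaN(boost)) {
        return 5; // documented default strength
    }
    return Math.min(10, Math.max(1, boost));
}
```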

getTypeName(): string

  • Returns string

    Description

    Returns the name of this object's type.

isOfType(type): boolean

  • Parameters

    • type: string

    Returns boolean

    Description

    Returns true if the object matches or derives from the passed in type.

isSame(other): boolean

  • Parameters

    • other

    Returns boolean

    Description

    Returns true if this object is the same as other. Useful for checking if two references point to the same thing.
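As an illustration of these three methods, a plain-JavaScript stub (not the real runtime object; actual instances are created by the Lens Studio engine) might mirror the documented contracts like this:

```javascript
// Illustrative stub only: mirrors the documented behavior of getTypeName,
// isOfType, and isSame on ListeningOptions-like objects.
function makeMockOptions() {
    var self = {
        getTypeName: function () { return "ListeningOptions"; },
        // The real isOfType also matches derived/base types; this stub
        // only checks the exact type name.
        isOfType: function (type) { return type === "ListeningOptions"; },
        // isSame is reference identity: true only for the same object.
        isSame: function (other) { return other === self; }
    };
    return self;
}
```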

Generated using TypeDoc