Interaction Attendant Help
Working with Speech Recognition
In Interaction Attendant, you can set up and configure speech recognition so that callers can navigate the IVR by speaking their choices instead of (or in addition to) entering them on the keypad.
For example, if a caller wants to find the balance on a credit card account, he or she might hear one of the following:
“To hear your current balance, press 1.”
“To hear your current balance, say ‘Balance’.”
“To hear your current balance, press 1 or say ‘Balance’.”
Speech engines defined by your administrator match a caller’s spoken audio to menu operations you configure with keywords and phrases, and Interaction Attendant transfers the caller to the appropriate destination.
In some cases, callers might use different terms to refer to a department or choice. For example, “customer service”, “customer support”, and “support” may all refer to the same department. You can add multiple keywords and phrases to increase the speech engine’s ability to find an appropriate match to the caller’s request.
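Conceptually, mapping several keywords to one destination works like a simple lookup table. The sketch below is purely illustrative; the dictionary, function, and destination names are hypothetical and not part of the Interaction Attendant product, which you configure through its user interface rather than code.

```python
# Hypothetical sketch: several spoken keywords mapping to the same
# destination, which improves the chance the engine finds a match.
KEYWORD_DESTINATIONS = {
    "customer service": "Support Queue",
    "customer support": "Support Queue",
    "support": "Support Queue",
    "balance": "Account Balance",
}

def route_utterance(utterance):
    """Return the destination for a recognized utterance, or None."""
    return KEYWORD_DESTINATIONS.get(utterance.strip().lower())

print(route_utterance("Support"))         # matched: Support Queue
print(route_utterance("weather report"))  # no match: None
```

Because "customer service", "customer support", and "support" all point to the same entry, a caller can use any of those phrases and still reach the right place.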
To get started
Enable Speech Recognition at the Profile level: Before you can use the speech recognition feature, you must first enable it at the profile level.
Configure Speech Recognition for Schedules and Menus: Next, at the schedule or menu level, optionally configure settings such as confidence levels, timeouts, and the prompts to play when the speech engine does not recognize the spoken audio or finds multiple matches. By default, these settings use the parameters configured by your administrator in Interaction Administrator.
Add keywords or phrases to Speech Recognition in Inbound Call Operations: Finally, in each inbound call operation included in the schedule, set up keywords and phrases. You can also select the language to use, and which node to navigate to if the speech engine locates a match anywhere in the schedule.
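The decision the configured menu makes for each recognition attempt can be sketched as follows. This is a hypothetical illustration only: the class, function, field names, and the 0.6 threshold are invented for the example, and the real behavior is governed by the settings you choose in Interaction Attendant and Interaction Administrator.

```python
# Hypothetical sketch of a speech menu's decision, assuming the engine
# returns candidate matches plus a confidence score (0.0 to 1.0).
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    matches: list     # candidate destinations found by the engine
    confidence: float # engine's confidence in the recognition

def handle_result(result, min_confidence=0.6):
    """Choose an action for a recognition result."""
    if not result.matches or result.confidence < min_confidence:
        # Low confidence or nothing matched: replay the "not
        # recognized" prompt and let the caller try again.
        return "play not-recognized prompt and re-prompt"
    if len(result.matches) > 1:
        # Several operations matched: ask the caller to clarify.
        return "play multiple-matches prompt"
    return "transfer to " + result.matches[0]

print(handle_result(RecognitionResult(["Support Queue"], 0.9)))
print(handle_result(RecognitionResult([], 0.2)))
```

The three branches correspond to the prompts and confidence settings described above: a no-match/low-confidence prompt, a multiple-matches prompt, and a transfer to the matched destination.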