In Syncfusion’s most recent webinar, “Let’s See What It Takes to Build an AI Chatbot!”, team lead Vimal Prabhu Manohkaran took attendees through the basic anatomy of a bot framework, how it functions, and how to incorporate AI into it. The webinar offered a step-by-step tutorial on configuring and setting up a simple chatbot using Azure and Microsoft’s Bot Framework. If you missed the webinar, or would like to watch it again, check out our YouTube page, or watch it here:
The following is the Q&A portion of the webinar.
Are there any stand-alone editors for this?
There are no stand-alone editors for bot development, so you can use any text editor. However, I would personally recommend the Visual Studio IDE, since it has debugging capabilities. You can also edit and deploy in the Azure portal: go to Build > Open online editor.
Are Alexa, Google Assistant, Siri, etc., examples of chatbots?
Alexa is not a chatbot itself. It’s an agent that triggers bot services in the background to get things done for you. The same applies to Google Assistant and Siri.
How do you deserialize or extract entities?
1. Go to this sample link.
2. Open With LUIS folder.
3. Open the Appointment.cs file.
4. Go to the Convert method in that class.
5. As you can see, the result parameter provides the JSON string from the bot server. As I mentioned in the webinar, all bot conversations happen as long JSON-formatted strings. So, to use that string, we run it through a JSON deserializer to break out the entities and intents returned by the bot server.
6. Make sure to maintain a separate array for each of your entities, in the required data type and with the same name as the entity specified in the LUIS app.
In my example, I have used two entities, dateTime and title, of the DateTime and string types. So I have maintained two arrays, one of type DateTime and one of type string.
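The webinar sample does this in C# (the Convert method in Appointment.cs), but the idea is language-agnostic. Here is a minimal sketch in Python, assuming a typical LUIS-style response shape; the JSON content and entity names (dateTime, title) are hypothetical examples, not output from a real LUIS app:

```python
import json

# A hypothetical LUIS-style response; the real JSON your bot receives
# has a similar overall shape: a top-scoring intent plus an entity list.
raw = """
{
  "query": "book a meeting called standup tomorrow at 9 am",
  "topScoringIntent": { "intent": "CreateAppointment", "score": 0.97 },
  "entities": [
    { "type": "title",    "entity": "standup" },
    { "type": "dateTime", "entity": "tomorrow at 9 am" }
  ]
}
"""

# Deserialize the long JSON string from the bot server.
result = json.loads(raw)

intent = result["topScoringIntent"]["intent"]

# Keep a separate array per entity, named after the entities in the LUIS app.
titles = [e["entity"] for e in result["entities"] if e["type"] == "title"]
dateTimes = [e["entity"] for e in result["entities"] if e["type"] == "dateTime"]

print(intent)     # CreateAppointment
print(titles)     # ['standup']
print(dateTimes)  # ['tomorrow at 9 am']
```

Grouping entities into per-type arrays this way mirrors the C# sample, where one array of type DateTime and one of type string hold the extracted values.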
What inbuilt API support does Microsoft Bot Framework contain?
Microsoft provides a wide range of APIs: Face API, Recommendations API, Text Analytics API, Emotion API, Academic Knowledge API, Web Language Model API, Computer Vision API, Bing Search API, Content Moderator, Translator, etc. You can learn about them in more detail on Microsoft’s website.
How do you integrate voice commands instead of text?
Use the Speech SDK of the Microsoft Bot Framework and make use of the SSML (Speech Synthesis Markup Language) parameter when sending prompts to the user. Bots read the text from the SSML parameter on speech-enabled and accessibility-enabled devices. You can get started on this in this Microsoft documentation.
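As a rough illustration of what such a spoken prompt looks like, here is a small SSML fragment. The elements used (speak, break, emphasis) come from the SSML specification; the prompt text itself is just a hypothetical example:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  When would you like to schedule the appointment?
  <break time="300ms"/>
  You can say something like <emphasis>tomorrow at 9 am</emphasis>.
</speak>
```

The bot sends this markup alongside the display text, and a speech-enabled channel renders it as audio instead of (or in addition to) showing the text.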
Can I download your sample solutions anywhere?
Samples will be uploaded to Syncfusion’s GitHub page, and a link will be shared with you along with the video recording of the webinar.
If you liked this post, we think you’ll also enjoy: