I am creating an iOS app and want to access an Alexa Skill from inside the app. Here is what I have so far: 1) Log in with Amazon - got the auth token 2) Record spoken audio and send it to Amazon via the Alexa Voice Service (AVS) 3) My skill receives the request, infers the intent, and responds 4) Play the audio response received through AVS
Everything works up to this point.
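For reference, here is a rough sketch of how I build the AVS request in step 2. The event and payload field names follow the AVS v20160207 API; the helper function names (`buildRecognizeMetadata`, `buildMultipartBody`) are just my own, and the real app streams microphone audio rather than passing a `Data` blob:

```swift
import Foundation

// Metadata part of the AVS SpeechRecognizer.Recognize event (v20160207 API).
func buildRecognizeMetadata(messageId: String, dialogRequestId: String) -> [String: Any] {
    return [
        "context": [],  // device state (Alerts, Speaker, ...) would go here
        "event": [
            "header": [
                "namespace": "SpeechRecognizer",
                "name": "Recognize",
                "messageId": messageId,
                "dialogRequestId": dialogRequestId
            ],
            "payload": [
                "profile": "NEAR_FIELD",
                "format": "AUDIO_L16_RATE_16000_CHANNELS_1"
            ]
        ]
    ]
}

// The /events call is a multipart POST: a JSON "metadata" part
// followed by the recorded audio as an "audio" part.
func buildMultipartBody(boundary: String, metadata: Data, audio: Data) -> Data {
    var body = Data()
    func append(_ s: String) { body.append(s.data(using: .utf8)!) }
    append("--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"metadata\"\r\n")
    append("Content-Type: application/json; charset=UTF-8\r\n\r\n")
    body.append(metadata)
    append("\r\n--\(boundary)\r\n")
    append("Content-Disposition: form-data; name=\"audio\"\r\n")
    append("Content-Type: application/octet-stream\r\n\r\n")
    body.append(audio)
    append("\r\n--\(boundary)--\r\n")
    return body
}
```

This body is POSTed over HTTP/2 to the AVS events endpoint (e.g. `https://avs-alexa-na.amazon.com/v20160207/events`) with the Login with Amazon token in an `Authorization: Bearer` header.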
Now I want to expose the same functionality through text as well. For users who are unable to speak or hear, I want to provide a UI inside the iOS app where they can type a question, have it go to my skill, and get back a text response. Alternatively, consider a scenario where I play the audio response but also show it as text, to help a user who may have misheard something. Even better would be if I could get the Alexa cards to show inside my app.