I am developing a Swift iOS application that calls a custom Alexa Skill I wrote in Python. The skill has access to event and context in its lambda_handler(event, context) function. What do I need to do to access event and context from within the iOS application?

I also need to use AVS (the Alexa Voice Service) as the natural-speech recognizer, and when Alexa responds to the user's request, I need to evaluate that request in the app and take a different action depending on what was asked.

My current idea is to have the Lambda function save the user requests to a DynamoDB table and then read them from within the iOS application, but I was wondering if there is a more elegant approach. I also tried sending the event JSON object back as sessionAttributes in the response object, but I did not see the sessionAttributes in the AVS response that the iOS application receives. Is there a "speech to text" directive/option?
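To make the DynamoDB idea concrete, here is a minimal sketch of what my Lambda handler would look like. The table name "UserRequests" and the helper function names are just placeholders I picked for illustration; the boto3 import is deferred so the rest of the module works without it:

```python
import json


def extract_request(event):
    """Pull the request type, intent name, and slot values out of the Alexa event."""
    request = event.get("request", {})
    intent = request.get("intent", {})
    return {
        "type": request.get("type"),
        "intent": intent.get("name"),
        "slots": {k: v.get("value") for k, v in intent.get("slots", {}).items()},
    }


def save_request(item, table_name="UserRequests"):
    """Persist the parsed request so the iOS app can poll it from DynamoDB.

    Table name is hypothetical; import boto3 lazily so the module
    loads even outside the Lambda runtime.
    """
    import boto3
    table = boto3.resource("dynamodb").Table(table_name)
    table.put_item(Item=item)


def build_response(speech, session_attributes):
    """Standard Alexa skill response envelope, with sessionAttributes attached."""
    return {
        "version": "1.0",
        "sessionAttributes": session_attributes,
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }


def lambda_handler(event, context):
    parsed = extract_request(event)
    save_request(parsed)  # the DynamoDB write the iOS app would later read
    # I also tried echoing the request back as sessionAttributes here,
    # but the iOS side never saw them in the AVS response:
    return build_response("OK", {"lastRequest": parsed})
```

On the iOS side I would then query the same table (e.g. via the AWS SDK for Swift), which works but feels like an extra round trip compared to getting the data directly from the AVS response.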