Hi all, I'm developing on Ubuntu with the C++ SDK.
Is there any way to capture the answers coming back to the Alexa Sample App, handle them, and, for example, send commands to a robot?
I'm trying to locate which part of the Sample App handles the answers coming from AWS, but I still haven't found it.
To give you more details: I've uploaded my lambda function (which includes ROS code to publish some ROS messages) to AWS, and I've implemented my own skill (meant to move the robot), which is already connected to that lambda function. I've also created a new Alexa device for my embedded board, and on this board I run the SampleApp. At this point, using the SampleApp, Alexa answers my skill correctly, and along with this answer the lambda function should also return ROS messages.

Now, from what I've read on the net, there is no way to "catch" the answer from AVS to my skill directly with the C++ SDK, because the SampleApp doesn't expose an event where I can write code to capture the result of the AVS processing. If it did, I could decide what the robot should do based on the answer, and I wouldn't have to put ROS code in the lambda function at all. Instead, I would have to use another known method to make the embedded board reachable from the net: give the board a static public IP, set this IP inside the uploaded lambda function, and have AWS connect to the ROS websocket bridge running on the board, which would then catch the incoming ROS messages.
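For reference, this is roughly how I understand the second approach would look on the lambda side: the function talks to rosbridge_server on the board over a websocket and publishes a ROS message. This is only a sketch under my assumptions, not tested code: the IP address, port 9090, the `/cmd_vel` topic, and the `geometry_msgs/Twist` type are placeholders, and the actual sending relies on the third-party `websocket-client` package.

```python
import json

# Hypothetical values: replace with the board's static public IP and
# the topic/message type your robot actually subscribes to.
ROSBRIDGE_URL = "ws://203.0.113.10:9090"
TOPIC = "/cmd_vel"
MSG_TYPE = "geometry_msgs/Twist"


def advertise_msg(topic, msg_type):
    """Build a rosbridge 'advertise' JSON message for the given topic."""
    return json.dumps({"op": "advertise", "topic": topic, "type": msg_type})


def publish_msg(topic, msg):
    """Build a rosbridge 'publish' JSON message wrapping a ROS message dict."""
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})


def send_twist(linear_x, angular_z, url=ROSBRIDGE_URL):
    """Connect to rosbridge on the board and publish one Twist message.

    Requires the third-party 'websocket-client' package
    (pip install websocket-client); imported here so the message
    builders above stay usable without it.
    """
    from websocket import create_connection

    ws = create_connection(url)
    try:
        ws.send(advertise_msg(TOPIC, MSG_TYPE))
        ws.send(publish_msg(TOPIC, {
            "linear": {"x": linear_x, "y": 0.0, "z": 0.0},
            "angular": {"x": 0.0, "y": 0.0, "z": angular_z},
        }))
    finally:
        ws.close()
```

The idea would be that the lambda handler calls something like `send_twist(...)` after resolving the intent, so the robot-specific code stays in one small function, with the board exposed via the static public IP.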
Could anyone confirm what I wrote? Is it really impossible to use the SampleApp to catch the answers from AVS and handle them?
Thank you so much