AWS Lex and Facebook - content type - amazon-web-services

I am working on AWS Lex, where I integrated it with FB Messenger successfully.
As far as I know, "responseCard" has only one "contentType":
"responseCard": {
"version": integer-value,
"contentType": "application/vnd.amazonaws.card.generic",
In FB Messenger, there are 4 different content types: audio, file, image, video
https://developers.facebook.com/docs/messenger-platform/send-api-reference/contenttypes
My question: how many content types does "responseCard" support?
What I want to achieve is for the bot to reply with a GIF that plays automatically, like the GIPHY bot does.
Thanks in advance

The only possible contentType value is the generic one, application/vnd.amazonaws.card.generic.
Based on this source: http://docs.aws.amazon.com/lex/latest/dg/API_runtime_ResponseCard.html
The result in Messenger will be just a static image, not an animated GIF.
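For reference, here is a rough sketch (not from the question) of a Lambda fulfillment response carrying the generic responseCard; the URLs are placeholders, and Messenger will still render the imageUrl as a still image:
// Sketch of a Lex (V1) Lambda fulfillment response using the only supported
// card type, application/vnd.amazonaws.card.generic. All URLs are placeholders.
exports.handler = async (event) => {
  return {
    dialogAction: {
      type: "Close",
      fulfillmentState: "Fulfilled",
      message: {
        contentType: "PlainText",
        content: "Here is your card"
      },
      responseCard: {
        version: 1,
        contentType: "application/vnd.amazonaws.card.generic",
        genericAttachments: [
          {
            title: "Example card",
            subTitle: "Messenger shows imageUrl as a still image",
            imageUrl: "https://example.com/animation.gif",
            attachmentLinkUrl: "https://example.com"
          }
        ]
      }
    }
  };
};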

Related

Byte-range requests from AWS S3

I have a problem consistently and reliably seeking videos with HTML5. I am serving the videos from an AWS S3 bucket using Node.js; all videos are in MP4 format. I have tried multiple things to get the video's current time to move every time (most of the time it works, but occasionally it doesn't), but to no avail.
Here's my code:
router.get("/*", (req, res, next) => {
let params = {
Bucket: "bucketName",
Key: decodeURIComponent(req.path.substring(1))
};
s3.getObject(params, function(err, data) {
if (err) {
res.status(500).send(err);
} else {
res.contentType(data.ContentType);
res.send(data.Body);
}
});
});
}
I've been doing some reading, and people say you can serve the video through byte-range requests. That answer does it with a local file, but I am at a loss as to how to do it with an S3 object. See post: can't seek html5 video or audio in chrome. The other suggestion I've heard is HLS encoding, but I am not sure what the best way is or how to implement it. Can someone point me in the right direction?
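For reference, a rough sketch of what a byte-range proxy against S3 could look like (hypothetical; the answer below recommends avoiding the proxy entirely and serving from S3 or CloudFront instead):
// Sketch: forward the browser's Range header to S3 and relay the partial
// response so the <video> element can seek. Bucket name is a placeholder.
router.get("/*", (req, res) => {
  const params = {
    Bucket: "bucketName",
    Key: decodeURIComponent(req.path.substring(1))
  };
  const range = req.headers.range; // e.g. "bytes=0-" sent by the browser when seeking
  if (range) {
    params.Range = range;
  }
  s3.getObject(params, (err, data) => {
    if (err) {
      return res.status(500).send(err);
    }
    res.set("Content-Type", data.ContentType);
    res.set("Accept-Ranges", "bytes");
    if (range) {
      // S3 echoes the satisfied range back, e.g. "bytes 0-1023/15728640"
      res.status(206).set("Content-Range", data.ContentRange);
    }
    res.send(data.Body);
  });
});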
I think the best answer is probably to implement an HLS or DASH streaming solution. Here is an example of HLS with S3 and CloudFront. And here is a more comprehensive Best Practices for Streaming Media Delivery.
Right now, your app server is simply reading the entire video file from S3 and then sending the entire video file contents directly to the client in an HTTP response. While it appears to work, it might be better to avoid proxying this content, and instead serve it directly to the client from S3 (or CloudFront). One way to do that, for private content, is to send the client an S3 pre-signed URL.
I tested a simple HTML5 video web page against an S3-hosted MPEG video file and was able to view it fine on Chrome, as well as seek back and forth at will. I tested with a relatively small MPEG (15MB).
<html>
<body>
  <h1>Stack Overflow 65796272 Video Sample</h1>
  <p>This video and associated poster are sourced from Amazon S3 via pre-signed URL.</p>
  <div id="container">
    <video id="video" controls="controls" preload="none" width="600" poster="https://poster-presigned-url-here">
      <source id="mp4" src="https://video-presigned-url-here" type="video/mp4" />
      <p>Your user agent does not support the HTML5 video element.</p>
    </video>
  </div>
</body>
</html>
I pre-created the poster and video pre-signed URLs using the awscli, but you can do this with an AWS SDK and serve them dynamically to your client (or inject them into the HTML sent to the client, for example from Express.js with any standard template engine). You can remove the poster if it's not needed. Note that pre-signed URLs are time-limited.
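As a rough sketch of the SDK route (AWS SDK for JavaScript v2; bucket and key names are placeholders):
// Sketch: generate time-limited pre-signed GET URLs for the video and its poster,
// then inject them into the <video poster> and <source src> attributes above.
const AWS = require("aws-sdk");
const s3 = new AWS.S3();

function presign(key) {
  return s3.getSignedUrl("getObject", {
    Bucket: "bucketName",
    Key: key,
    Expires: 60 * 60 // the URL stops working after one hour
  });
}

const videoUrl = presign("videos/sample.mp4");
const posterUrl = presign("videos/sample-poster.jpg");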

Playing an MP3 created by AWS API Gateway in an Alexa skill

I have an AWS Lambda function that is called by AWS API Gateway. The function takes URLs of multiple MP3 files hosted on AWS S3 and concatenates them into a single MP3 file. When I call the API from a browser, all is good (the browser opens a media player and the combined MP3 audio is played).
The URL request looks like this:
https://0xxxxxxxx.execute-api.eu-west-1.amazonaws.com/alpha/files?file=https://xxx.s3-eu-west-1.amazonaws.com/file1.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file2.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file3.mp3
The HTTP response is of type audio/mpeg, about 10 seconds long, and is base64 encoded.
I've tried to wrap this into SSML in my skill and it fails. From the Alexa Skills Kit voice and tone simulator, I get the error message "error retrieving text to speech; the input was incompatible".
In the simulator, this is what I wrote:
<speak>
<audio src='https://0xxxxxxxx.execute-api.eu-west-1.amazonaws.com/alpha/files?file=https://xxx.s3-eu-west-1.amazonaws.com/file1.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file2.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file3.mp3'/>
</speak>
and I used this to confirm that S3 access works in the simulator:
<speak>
<audio src='https://s3.amazonaws.com/ask-soundlibrary/human/amzn_sfx_crowd_applause_05.mp3'/>
</speak>
Any ideas what is wrong? Is the issue with the HTTP response from my Lambda function, or does something need to be enabled in API Gateway? From my API Gateway logs, it seems that the skill never tries to access the gateway.
Should I be using a different approach to fetch the MP3 for playback? Note: I want to use SSML because my audio is a sound effect and therefore shouldn't use the AudioPlayer interface (this is an Amazon requirement).
I might be able to help you with this. The same problem happened to me, and after researching I resolved it. The problem is the "&" characters in the link you provide in the SSML. The solution you posted works because there is no longer an "&" in your link; too many parameters is not the problem.
I suggest you replace "&" with "&amp;".
In Python:
url = 'https://0xxxxxxxx.execute-api.eu-west-1.amazonaws.com/alpha/files?file=https://xxx.s3-eu-west-1.amazonaws.com/file1.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file2.mp3&file=https://xxx.s3-eu-west-1.amazonaws.com/file3.mp3'
url = url.replace("&","&")
<speak>
"<audio src='" + url + "'/>"
</speak>
I hope this helps you. Please let me know if it doesn't work.
Ok, I've worked it out myself.
It seems that the SSML audio src doesn't like too many parameters in the URL. I now just pass one parameter in the URL and use my Lambda function to strip out the multiple filenames from that single parameter.
https://0xxxxxxxx.execute-api.eu-west-1.amazonaws.com/alpha/files?file=/file1.mp3file=file2.mp3file=file3.mp3
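For illustration, a sketch (not the author's actual code) of how the Lambda might split that single parameter back into file names, assuming the names are joined with a literal "file=" separator as in the URL above:
// Sketch of an API Gateway proxy handler that recovers the individual file
// names from the single "file" query parameter. The real function would then
// fetch each file from S3, concatenate the audio, and return it base64-encoded
// with Content-Type audio/mpeg; here we just echo the recovered names.
exports.handler = async (event) => {
  const raw = (event.queryStringParameters && event.queryStringParameters.file) || "";
  // "/file1.mp3file=file2.mp3file=file3.mp3" -> ["/file1.mp3", "file2.mp3", "file3.mp3"]
  const files = raw.split("file=").filter((name) => name.length > 0);

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ files: files })
  };
};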

Amazon s3 Bucket Serverless Image Handler Access Denied

I deployed the Serverless Image Handler to an S3 bucket using this guide, and the deployment was successful.
I also tried the Demo UI and it's working. But I cannot get images from the S3 bucket with Thumbor image requests, like this:
https://<distName>.cloudfront.net/fit-in/500x500/image.png
For my existing images, when I add "fit-in/500x500" to the URL it gives an "AccessDenied" error.
For URLs produced by the Serverless Image Handler Demo UI it says:
{
    "status": 400,
    "code": "RequestTypeError",
    "message": "The type of request you are making could not be processed. Please ensure that your original image is of a supported file type (jpg, png, tiff, webp) and that your image request is provided in the correct syntax. Refer to the documentation for additional guidance on forming image requests."
}
How can I make my existing buckets work with the Serverless Image Handler by URL?

How can I return hyperlinked text in a Lexresponse?

So I am building a Lex chatbot and I am trying to return a response with hyperlinked text. I have the chatbot sitting on a front end, but I can't seem to find a way to return responses with hyperlinks. Here's what I have so far:
https://imgur.com/N6Bp2fX
https://imgur.com/zbnUsrH
Now I've read that responses from Lex are formatted by whatever client the chatbot is sitting on. For example, in the chatbot test window on the Amazon site, returning hyperlinks is impossible, but Skype automatically hyperlinks URLs. Mine is sitting in a browser, but I still can't get a hyperlinked response in the bot.
Would love it if anyone could help me out! Thanks in advance!
The test console window of Lex does not support HTML rendering. You can instead deploy your chatbot to a channel like Facebook or Slack, and it will be rendered correctly.
You can use the custom markup option to send a response in the following JSON format so that your client can format it:
{
    "text": "Check out the following link",
    "type": "hyperlink",
    "links": [{
        "linkText": "Google",
        "url": "https://google.com"
    }]
}
Lex can return any response that you want, but it's the responsibility of the chat client to parse that response and display it accordingly.
So you need to write your own logic to detect hyperlinks and render them.
In your case you can send a response from Lex like: Please visit [link]www.google.com[\link].
Then you can write logic to wrap the text in an anchor tag <a> in your chat window so that it is displayed as a hyperlink.
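For example, a sketch of that client-side parsing, assuming the [link]...[\link] convention suggested above (these tags are the answer's own convention, not a Lex feature):
// Replace every [link]target[\link] span in the bot's reply with an anchor tag
// before rendering the message in the chat window.
function renderBotMessage(text) {
  return text.replace(/\[link\](.*?)\[\\link\]/g, function (match, target) {
    var href = target.indexOf("http") === 0 ? target : "https://" + target;
    return '<a href="' + href + '" target="_blank">' + target + "</a>";
  });
}

// "Please visit [link]www.google.com[\link]." becomes
// 'Please visit <a href="https://www.google.com" target="_blank">www.google.com</a>.'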
Hope it helps.

Read stream from Facebook Live Videos

I would like to create a server that generates subtitles for live videos on Facebook. I use Google Speech to convert the audio to text. However, in order to do that, I need to read the Facebook live streams.
Using the Facebook Live API, with me/live_videos, I get the following response:
{
    "status": "LIVE",
    "stream_url": "rtmp://rtmp-api.facebook.com:80/rtmp/{id}",
    "secure_stream_url": "rtmps://rtmp-api.facebook.com:443/rtmp/{id}",
    "embed_html": "<iframe src=\"https://www.facebook.com/video/embed?video_id={video_id}\" width=\"400\" height=\"400\" frameborder=\"0\"></iframe>",
    "id": "{id}"
}
How can I read the streams from the above links?
I figured out that there is no way to get the current stream from Facebook right now. Maybe they should add this feature to their API.
You can get the stream of an ongoing live video, which you can play using any DASH player.
To get the stream URL of Live Video, follow these steps:
Use the LIVE_ID of the video (not the VIDEO_ID) to make the request.
Send a GET request to the endpoint /LIVE_ID with 'dash_preview_url' in the fields parameter and your 'access_token'.
This will return the URL of the ongoing live stream, which can be played using any DASH player.
You can refer to official documentation for more information.
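For example, a sketch of the Graph API call described in the steps above (the Graph API version and the fetch-based HTTP client are assumptions; LIVE_ID and the access token are placeholders):
// Ask the Graph API for the DASH manifest URL of an ongoing live video and
// hand it to any DASH player (for example dash.js).
const LIVE_ID = "your-live-video-id";
const ACCESS_TOKEN = "your-access-token";

async function getDashPreviewUrl() {
  const url =
    "https://graph.facebook.com/v12.0/" + LIVE_ID +
    "?fields=dash_preview_url&access_token=" + ACCESS_TOKEN;
  const response = await fetch(url);
  const data = await response.json();
  return data.dash_preview_url;
}

getDashPreviewUrl().then((streamUrl) => console.log(streamUrl));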