Get live video from Amazon KVS - amazon-web-services

I am trying to get a live video stream from Amazon KVS to show in a dashboard that I am building using React. I am very new to the Amazon KVS ecosystem and have no idea how things work, hence asking you good folks here.
I tried referring to 'amazon-kinesis-video-streams-webrtc-sdk-js' on GitHub, but it does not explain clearly how I can fetch the live video stream from KVS. The following is what I've tried, and I have no clear idea what to do with endpointsByProtocol next to get the stream. I am a bit on a deadline as well; hope someone can help.
var options = {
    accessKeyId: response?.access_key_id,
    secretAccessKey: response?.secret_access_key,
    region: response?.region,
    sessionToken: response?.session_token
}

const kinesisVideoClient = new AWS.KinesisVideo(options);

const { ChannelInfo } = await kinesisVideoClient.describeSignalingChannel({
    ChannelName: channelName
}).promise();

const { ChannelARN, ChannelName } = ChannelInfo;

const getSignalingChannelEndpointResponse = await kinesisVideoClient.getSignalingChannelEndpoint({
    ChannelARN: ChannelARN,
    SingleMasterChannelEndpointConfiguration: {
        Protocols: ['WSS', 'HTTPS'],
        Role: Role.VIEWER,
    },
}).promise();

const endpointsByProtocol = getSignalingChannelEndpointResponse.ResourceEndpointList.reduce((endpoints, endpoint) => {
    endpoints[endpoint.Protocol] = endpoint.ResourceEndpoint;
    return endpoints;
}, {});
P.S.: Apologies if this is something very basic, and thanks.
I tried referring to the official AWS docs as well, but they were not clear.
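For reference, the SDK's README describes roughly the following viewer flow once endpointsByProtocol is built. This is only an untested sketch that assumes the 'amazon-kinesis-video-streams-webrtc' npm package; clientId and the videoElement the remote stream is attached to are placeholders, so check the SDK README for the exact API.

const KVSWebRTC = require('amazon-kinesis-video-streams-webrtc');

// HTTPS endpoint: used to fetch ICE servers; WSS endpoint: used by the signaling client.
const kinesisVideoSignalingChannelsClient = new AWS.KinesisVideoSignalingChannels({
    ...options,
    endpoint: endpointsByProtocol.HTTPS,
});
const getIceServerConfigResponse = await kinesisVideoSignalingChannelsClient
    .getIceServerConfig({ ChannelARN: ChannelARN })
    .promise();
const iceServers = [{ urls: `stun:stun.kinesisvideo.${options.region}.amazonaws.com:443` }];
getIceServerConfigResponse.IceServerList.forEach(iceServer =>
    iceServers.push({ urls: iceServer.Uris, username: iceServer.Username, credential: iceServer.Password })
);

const signalingClient = new KVSWebRTC.SignalingClient({
    channelARN: ChannelARN,
    channelEndpoint: endpointsByProtocol.WSS,
    clientId: 'dashboard-viewer', // placeholder, any unique string
    role: KVSWebRTC.Role.VIEWER,
    region: options.region,
    credentials: {
        accessKeyId: options.accessKeyId,
        secretAccessKey: options.secretAccessKey,
        sessionToken: options.sessionToken,
    },
});

const peerConnection = new RTCPeerConnection({ iceServers });

signalingClient.on('open', async () => {
    // Create an SDP offer and send it to the master.
    const offer = await peerConnection.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true });
    await peerConnection.setLocalDescription(offer);
    signalingClient.sendSdpOffer(peerConnection.localDescription);
});
signalingClient.on('sdpAnswer', answer => peerConnection.setRemoteDescription(answer));
signalingClient.on('iceCandidate', candidate => peerConnection.addIceCandidate(candidate));

peerConnection.addEventListener('icecandidate', ({ candidate }) => {
    if (candidate) signalingClient.sendIceCandidate(candidate);
});
peerConnection.addEventListener('track', event => {
    // Attach the remote stream to a <video> element in the React dashboard (placeholder).
    videoElement.srcObject = event.streams[0];
});

signalingClient.open();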

Related

Amazon Textract: How to select 'Raw text' option

We are trying to integrate the Amazon Textract API in our Node.js application. We are facing an issue with the FeatureTypes parameter while processing an image. We need to achieve the 'Raw text' option via the API:
We are not finding the option in the AWS JavaScript SDK.
export type FeatureType = "TABLES"|"FORMS"|string;
I'm trying this code:
const params = {
    Document: {
        /* required */
        Bytes: Buffer.from(fileData)
    },
    FeatureTypes: [""] // here I'm facing the issue; if I pass "TABLES" | "FORMS" it works
};
var textract = new AWS.Textract({
    region: awsConfig.awsRegion,
    accessKeyId: awsConfig.awsAccesskeyID,
    secretAccessKey: awsConfig.awsSecretAccessKey
})
textract.analyzeDocument(params, (err, data) => {
    console.log(err, data)
    if (err) {
        return resolve(err)
    } else {
        resolve(data)
    }
})
Getting this error:
InvalidParameterType: Expected params.FeatureTypes[0] to be a string
If I pass "TABLES" | "FORMS" it works, but I need the Raw Text option.
Thanks in advance
You have been calling the analyzeDocument() function:
Analyzes an input document for relationships between detected items.
It returns various types of text:
'BlockType': 'KEY_VALUE_SET'|'PAGE'|'LINE'|'WORD'|'TABLE'|'CELL'|'SELECTION_ELEMENT',
The LINE and WORD blocks seem to match your requirements.
Alternatively, there is also a detectDocumentText() function:
Detects text in the input document. Amazon Textract can detect lines of text and the words that make up a line of text.
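For example, here is a minimal sketch of that alternative, reusing the awsConfig and fileData names from the question (untested); keeping only the LINE blocks gives you the raw text:

var textract = new AWS.Textract({
    region: awsConfig.awsRegion,
    accessKeyId: awsConfig.awsAccesskeyID,
    secretAccessKey: awsConfig.awsSecretAccessKey
});

// detectDocumentText() takes no FeatureTypes at all.
const params = {
    Document: {
        Bytes: Buffer.from(fileData)
    }
};

textract.detectDocumentText(params, (err, data) => {
    if (err) return console.error(err);
    // Blocks come back as PAGE, LINE and WORD; the LINE text is the "raw text".
    const rawText = data.Blocks
        .filter(block => block.BlockType === 'LINE')
        .map(block => block.Text)
        .join('\n');
    console.log(rawText);
});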

s3.putObject(params).promise() does not upload file, but successfully executes then() callback

I have made a pretty long series of attempts to put a file in an S3 bucket, after which I have to update my model.
I have the following code (note that I have tried the commented lines too; it works neither with nor without them).
The problem observed:
Everything in the first .then() block (successCallBack()) gets successfully executed, but I do not see the result of s3.putObject().
The bucket in question is public, with no access restrictions. It used to work with the sls offline option; then, because it was not working in AWS, I had to make a lot of changes and managed to make successCallback() work, which does the database work successfully. However, the file upload still doesn't work.
Some questions:
While solving this, the real questions I am pondering are:
Is a lambda supposed to return something? I saw the AWS docs, but they have fragmented code snippets.
Putting await in front of s3.putObject(params).promise() does not help. I see samples with and without await in front of calls that end in the AWS .promise() function, and I am not sure which ones are correct.
What is the correct way to accomplish chained async functions within one lambda function? (See the sketch below.)
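For what it's worth, the usual shape of such a chain is an async handler that awaits each .promise() call in turn and returns the response object at the end. The following is only a sketch reusing names from the code in the update below (bucket and key are placeholders), not a verified fix:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const createBook = async (event) => {
    try {
        // Wait for the upload to finish before touching the database.
        await s3.putObject({
            Bucket: 'com.xxxx.yyyy',                 // placeholder bucket name
            Key: 'some-file-key',                    // placeholder key
            Body: Buffer.from(event.body, 'utf8')
        }).promise();

        await successCallBack();                     // the database step

        // Whatever is returned here becomes the Lambda's response.
        return {
            statusCode: 200,
            headers: { 'Content-Type': 'text/plain' },
            body: "Success!!!"
        };
    } catch (err) {
        return {
            statusCode: err.statusCode || 500,
            headers: { 'Content-Type': 'text/plain' },
            body: "File upload / Creation error: " + err
        };
    }
};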
UPDATE:
var myJSON = {}

const createBook = async (event) => {
    let bucketPath = "https://com.xxxx.yyyy.aa-bb-zzzzzz-1.amazonaws.com"
    let fileKey = //file key
    let path = bucketPath + "/" + fileKey;
    myJSON = {
        //JSON from headers
    }
    var s3 = new AWS.S3();
    let buffer = Buffer.from(event.body, 'utf8');
    var params = {Bucket: 'com.xxxx.yyyy', Key: fileKey, Body: buffer, ContentEncoding: 'utf8'};
    let putObjPromise = s3.putObject(params).promise();
    putObjPromise
        .then(successCallBack())
        .then(c => {
            console.log('File upload Success!');
            return {
                statusCode: 200,
                headers: { 'Content-Type': 'text/plain' },
                body: "Success!!!"
            }
        })
        .catch(err => {
            let str = "File upload / Creation error:" + err;
            console.log(str);
            return {
                statusCode: err.statusCode || 500,
                headers: { 'Content-Type': 'text/plain' },
                body: str
            }
        });
}

const successCallBack = async () => {
    console.log("Inside success callback - " + JSON.stringify(myJSON));
    const { myModel } = await connectToDatabase()
    console.log("After connectToDatabase")
    const book = await myModel.create(myJSON)
    console.log(msg);
}
Finally, I got this to work. My code already worked in the sls offline setup.
What was different on the AWS endpoint?
What I observed in the console was that my lambda was set to run under a VPC.
When I chose No VPC, it worked. I do not know if this is best practice; there must be some security advantage to functions running under a VPC.
I came across this huge explanation about VPCs, but I could not find anything related to S3.
The code posted in the question currently runs fine on the AWS endpoint.
If the lambda is running in a VPC, then you need a VPC endpoint to access a service outside the VPC, and S3 is outside the VPC. If security is a concern, creating a VPC endpoint solves the issue in a better way. Also, if security is a concern, adding a policy (or using the default AmazonS3FullAccess policy) to the role that the lambda is using means the S3 bucket wouldn't need to be public.
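As an illustration of that first option, a gateway endpoint for S3 can be added to the Lambda's VPC, e.g. with the SDK. This is a sketch; the VPC ID, route table ID and region are placeholders, and this is normally done once in the console or infrastructure config rather than at runtime:

const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.createVpcEndpoint({
    VpcEndpointType: 'Gateway',
    VpcId: 'vpc-0123456789abcdef0',                 // the VPC the Lambda runs in (placeholder)
    ServiceName: 'com.amazonaws.us-east-1.s3',      // S3 service name for the region
    RouteTableIds: ['rtb-0123456789abcdef0']        // route table(s) used by the Lambda's subnets (placeholder)
}, (err, data) => {
    if (err) console.error(err);
    else console.log('Created S3 gateway endpoint:', data.VpcEndpoint.VpcEndpointId);
});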

AWS RDSDataService query not running

I'm trying to use RDSDataService to query an Aurora Serverless database. When I try to query, my lambda just times out (I've set it to 5 minutes just to make sure it isn't a problem with that). I have 1 record in my database, and when I try to query it, I get no results, and neither the error nor the data callback is called. I've verified that executeSql is called by removing dbClusterOrInstanceArn from my params, and it throws the exception for not having it.
I have also run SHOW FULL PROCESSLIST in the query editor to see if the queries were still running, and they are not. I've given the lambda both the AmazonRDSFullAccess and AmazonRDSDataFullAccess policies without any luck either. As you can see from the code below, I've already tried what was recommended in issue #2376.
Not that this should matter, but this lambda is triggered by a Kinesis event trigger.
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
    const RDS = new AWS.RDSDataService({apiVersion: '2018-08-01', region: 'us-east-1'})

    for (record of event.Records) {
        const payload = JSON.parse(new Buffer(record.kinesis.data, 'base64').toString('utf-8'));
        const data = compileItem(payload);

        const params = {
            awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:149070771508:secret:xxxxxxxxx',
            dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:149070771508:cluster:xxxxxxxxx',
            sqlStatements: `select * from MY_DATABASE.MY_TABLE`
            // database: 'MY_DATABASE'
        }

        console.log('calling executeSql');
        RDS.executeSql(params, (error, data) => {
            if (error) {
                console.log('error', error)
                callback(error, null);
            } else {
                console.log('data', data);
                callback(null, { success: true })
            }
        });
    }
}
EDIT: We've run the command through the aws cli and it returns results.
EDIT 2: I'm able to connect to it using the mysql2 package and connecting through the URI, so it's definitely an issue with either the aws-sdk or how I'm using it.
Node.js execution is not waiting for the result; that's why the process exits before completing the request.
Use the serverless-mysql library: https://www.npmjs.com/package/serverless-mysql
OR
use context.callbackWaitsForEmptyEventLoop = false
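The second option is a one-line change at the top of the handler from the question (a sketch, not verified against this exact setup):

exports.handler = (event, context, callback) => {
    // Don't wait for open connections (e.g. DB sockets) to drain
    // before returning the callback result.
    context.callbackWaitsForEmptyEventLoop = false;

    // ... the existing RDSDataService logic from the question ...
};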
The problem was that the RDS had to be created in a VPC, which the Lambdas were not in.

s3 SignedUrl x-amz-security-token

const AWS = require('aws-sdk');

export function main (event, context, callback) {
    const s3 = new AWS.S3();
    const data = JSON.parse(event.body);
    const s3Params = {
        Bucket: process.env.mediaFilesBucket,
        Key: data.name,
        ContentType: data.type,
        ACL: 'public-read',
    };
    const uploadURL = s3.getSignedUrl('putObject', s3Params);
    callback(null, {
        statusCode: 200,
        headers: {
            'Access-Control-Allow-Origin': '*'
        },
        body: JSON.stringify({ uploadURL: uploadURL }),
    })
}
When I test it locally it works fine, but after deployment the signed URL includes an x-amz-security-token query parameter, and then I get an access denied response. How can I get rid of this x-amz-security-token?
I was having the same issue. Everything was working flawlessly using serverless-offline but when I deployed to Lambda I started receiving AccessDenied issues on the URL. When comparing the URLs returned between the serverless-offline and AWS deployments I noticed the only difference was the inclusion of the X-Amz-Security-Token in the URL as a query string parameter. After some digging I discovered the token being assigned was based upon the assumed role the lambda function had. All I had to do was grant the appropriate S3 policies to the role and it worked.
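For what it's worth, "the appropriate S3 policies" here means something like the following inline policy on the Lambda's execution role. This is only a sketch: the role, policy and bucket names are placeholders, s3:PutObjectAcl is only included because the question sets ACL: 'public-read', and in practice you would normally attach this in the IAM console or your serverless config rather than at runtime:

const AWS = require('aws-sdk');
const iam = new AWS.IAM();

const policyDocument = JSON.stringify({
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: ['s3:PutObject', 's3:PutObjectAcl'],
        Resource: 'arn:aws:s3:::my-media-files-bucket/*'   // placeholder bucket
    }]
});

iam.putRolePolicy({
    RoleName: 'my-lambda-execution-role',                  // placeholder role name
    PolicyName: 'AllowPutToMediaBucket',                   // placeholder policy name
    PolicyDocument: policyDocument
}, (err) => {
    if (err) console.error(err);
    else console.log('Policy attached');
});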
I just solved a very similar, probably the same, issue as you have. I say probably because you don't say what deployment entails for you. I am assuming you are deploying to Lambda, but you may not be, so this may or may not apply; if you are using temporary credentials, it will apply.
I initially used the method you use above, but then tried the npm module aws-signature-v4 to see if it made a difference, and was getting the same error you are.
You will need the token; it is needed when you have signed a request with temporary credentials. In Lambda's case the credentials are in the runtime, including the session token, which you need to pass. The same is most likely true elsewhere as well, but I'm not sure; I haven't used EC2 in a few years.
Buried in the docs (and sorry I cannot find the place this is stated) it is pointed out that some services require that the session_token be processed with the other canonical query params. The module I'm using was tacking it on at the end, as the sig v4 instructions seem to imply, so I modified it so the token is canonical and it works.
We've updated the live version of the aws-signature-v4 module to reflect this change and now it works nicely for signing your s3 requests.
Signing is discussed here.
I would use the module I did as I have a feeling the sdk is doing the wrong thing for some reason.
Usage example (this is wrapped in a multipart upload, hence the part number and upload ID):
function createBaseUrl( bucketName, uploadId, partNumber, objectKey ) {
    let url = sig4.createPresignedS3URL( objectKey, {
        method: "PUT",
        bucket: bucketName,
        expires: 21600,
        query: `partNumber=${partNumber}&uploadId=${uploadId}`
    });
    return url;
}
I was facing the same issue. I'm creating a signed URL using the Boto3 library in Python 3.7.
Although this is not a recommended way to solve it, it worked for me.
The request method should be POST with content-type 'multipart/form-data'.
Create a client like this.
# Do not hard code credentials
s3_client = boto3.client(
    's3',
    # Hard coded strings as credentials, not recommended.
    aws_access_key_id='YOUR_ACCESS_KEY',
    aws_secret_access_key='YOUR_SECRET_ACCESS_KEY'
)
Return response
bucket_name = BUCKET
acl = {'acl': 'public-read-write'}
file_path = str(file_name)  # file you want to upload
response = s3_client.generate_presigned_post(bucket_name,
                                             file_path,
                                             Fields={"Content-Type": ""},
                                             Conditions=[acl,
                                                         {"Content-Type": ""},
                                                         ["starts-with", "$success_action_status", ""],
                                                         ],
                                             ExpiresIn=3600)

Amazon Rekognition for text detection

I have images of receipts and I want to store the text in the images separately. Is it possible to detect text from images using Amazon Rekognition?
Update from November 2017:
Amazon Rekognition announces real-time face recognition, Text in Image recognition, and improved face detection
Read the announcement here: https://aws.amazon.com/about-aws/whats-new/2017/11/amazon-rekognition-announces-real-time-face-recognition-text-in-image-recognition-and-improved-face-detection/
No, Amazon Rekognition does not provide Optical Character Recognition (OCR).
At the time of writing (March 2017), it only provides:
Object and Scene Detection
Facial Analysis
Face Comparison
Facial Recognition
There is no AWS-provided service that offers OCR. You would need to use a 3rd-party product.
Amazon doesn't provide an OCR API. You can use the Google Cloud Vision API for Document Text Recognition. It costs $3.50 per 1000 images, though. To test Google's API, open this link and paste the code below into the test request body on the right.
https://cloud.google.com/vision/docs/reference/rest/v1/images/annotate
{
  "requests": [
    {
      "image": {
        "source": {
          "imageUri": "JPG_PNG_GIF_or_PDF_url"
        }
      },
      "features": [
        {
          "type": "DOCUMENT_TEXT_DETECTION"
        }
      ]
    }
  ]
}
You may get better results with Amazon Textract although it's currently only available in limited preview.
It's possible to detect text in an image using the AWS JS SDK for Rekognition but your results may vary.
/* jshint esversion: 6, node:true, devel: true, undef: true, unused: true */

// Import libs.
const AWS = require('aws-sdk');
const axios = require('axios');

// Grab AWS access keys from Environmental Variables.
const { S3_ACCESS_KEY, S3_SECRET_ACCESS_KEY, S3_REGION } = process.env;

// Configure AWS with credentials.
AWS.config.update({
    accessKeyId: S3_ACCESS_KEY,
    secretAccessKey: S3_SECRET_ACCESS_KEY,
    region: S3_REGION
});

const rekognition = new AWS.Rekognition({
    apiVersion: '2016-06-27'
});

const TEXT_IMAGE_URL = 'https://loremflickr.com/g/320/240/text';

(async url => {
    // Fetch the URL.
    const textDetections = await axios
        .get(url, {
            responseType: 'arraybuffer'
        })
        // Convert the response body to a Buffer.
        .then(response => Buffer.from(response.data))
        // Pass bytes to SDK.
        .then(bytes =>
            rekognition
                .detectText({
                    Image: {
                        Bytes: bytes
                    }
                })
                .promise()
        )
        .catch(error => {
            console.log('[ERROR]', error);
            return false;
        });

    if (!textDetections) return console.log('Failed to find text.');

    // Output the raw response.
    console.log('\n', 'Text Detected:', '\n', textDetections);

    // Output the Detected Text only.
    console.log('\n', 'Found Text:', '\n', textDetections.TextDetections.map(t => t.DetectedText));
})(TEXT_IMAGE_URL);
See more examples of using Rekognition with NodeJS in this answer.
public async Task<List<string>> IdentifyText(string filename)
{
    // Using USEast1, not the default region
    AmazonRekognitionClient rekoClient = new AmazonRekognitionClient("Access Key ID", "Secret Access Key", RegionEndpoint.USEast1);

    Amazon.Rekognition.Model.Image img = new Amazon.Rekognition.Model.Image();
    byte[] data = null;
    using (FileStream fs = new FileStream(filename, FileMode.Open, FileAccess.Read))
    {
        data = new byte[fs.Length];
        fs.Read(data, 0, (int)fs.Length);
    }
    img.Bytes = new MemoryStream(data);

    DetectTextRequest dfr = new DetectTextRequest();
    dfr.Image = img;
    var outcome = rekoClient.DetectText(dfr);

    return outcome.TextDetections.Select(x => x.DetectedText).ToList();
}