AWS Rekognition not recognizing image as bytes - amazon-web-services

I'm getting this error in AWS:
InvalidParameterException: Requested image should either contain bytes or s3 object.
This is the code that passes the image to the AWS function:
const moderateEachPic = async (image) => {
  let imageBuffer = new Uint8Array(await image.arrayBuffer())
  // moderate image using AWS Rekognition
  console.log(imageBuffer)
  await verifyImage(imageBuffer).then((res) => {
    ....
  })
}
This is the function that calls AWS Rekognition:
var rekognition = new RekognitionClient({
  credentials: creds,
  region: region,
})
export async function verifyImage(image) {
  console.log(image)
  const params = {
    Image: {
      image
    },
    Attributes: ['ALL']
  };
  const command = new DetectFacesCommand(params);

Image, per the AWS SDK, has two properties, Bytes and S3Object. You have to set one of them appropriately for your example to work.
I can't get your sample to run locally, but try this:
var rekognition = new RekognitionClient({
  credentials: creds,
  region: region,
})
export async function verifyImage(image) {
  console.log(image)
  const params = {
    Image: {
      Bytes: image,
    },
    Attributes: ['ALL']
  };
  const command = new DetectFacesCommand(params);
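  // Sketch: the snippet above stops before the command is actually sent.
  // Assuming the rekognition client created earlier, the function can
  // finish like this:
  const response = await rekognition.send(command);
  return response.FaceDetails;
}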

Related

How to set credentials in AWS SDK v3 JavaScript?

I am scouring the documentation, and it only provides pseudo-code for the credentials in v3 (e.g. const client = new S3Client(clientParams)).
How do I initialize an S3Client with the bucket and credentials to perform a getSignedUrl request? Any resources pointing me in the right direction would be most helpful. I've searched YouTube, SO, etc., and I can't find anything specific on v3. Even the documentation and examples don't provide the actual code for using credentials. Thanks!
As an aside, do I have to include the fake folder structure in the filename, or can I just use the actual filename? For example: bucket/folder1/folder2/uniqueFilename.zip or uniqueFilename.zip.
Here's the code I have so far (keep in mind I was returning the wasabiObjKey to ensure I was getting the correct file name, and I am; it's the client, GetObjectCommand, and getSignedUrl that I'm having issues with):
exports.getPresignedUrl = functions.https.onCall(async (data, ctx) => {
  const wasabiObjKey = `${data.bucket_prefix ? `${data.bucket_prefix}/` : ''}${data.uid.replace(/-/g, '_').toLowerCase()}${data.variation ? `_${data.variation.replace(/\./g, '').toLowerCase()}` : ''}.zip`
  const { S3Client, GetObjectCommand } = require('@aws-sdk/client-s3')
  const s3 = new S3Client({
    bucketEndpoint: functions.config().s3_bucket.name,
    region: functions.config().s3_bucket.region,
    credentials: {
      secretAccessKey: functions.config().s3.secret,
      accessKeyId: functions.config().s3.access_key
    }
  })
  const command = new GetObjectCommand({
    Bucket: functions.config().s3_bucket.name,
    Key: wasabiObjKey,
  })
  const { getSignedUrl } = require("@aws-sdk/s3-request-presigner")
  const url = getSignedUrl(s3, command, { expiresIn: 60 })
  return wasabiObjKey
})
There is a credential chain that provides credentials to your API calls from the SDK (a sketch follows the list below):
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-credentials-node.html
Loaded from AWS Identity and Access Management (IAM) roles for Amazon EC2
Loaded from the shared credentials file (~/.aws/credentials)
Loaded from environment variables
Loaded from a JSON file on disk
Other credential-provider classes provided by the JavaScript SDK
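For example, a client constructed without explicit credentials walks this chain automatically (a minimal sketch):
const { S3Client } = require('@aws-sdk/client-s3');
// No credentials property here: the SDK resolves them from the chain above
// (environment variables, shared credentials file, or an attached IAM role).
const client = new S3Client({ region: 'ap-southeast-1' });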
You can embed the credentials inside your source code, but it's not the preferred way:
new S3Client(configuration: S3ClientConfig): S3Client
where S3ClientConfig contains a credentials property:
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");
let client = new S3Client({
  region: 'ap-southeast-1',
  credentials: {
    accessKeyId: '',
    secretAccessKey: ''
  }
});
(async () => {
  const response = await client.send(new GetObjectCommand({ Bucket: "BucketNameHere", Key: "ObjectNameHere" }));
  console.log(response);
})();
Sample response:
'$metadata': {
  httpStatusCode: 200,
  requestId: undefined,
  extendedRequestId: '7kwrFkEp3lEnLU+OtxjrgdmS6gQmvPdbnqqR7I8P/rdFrUPBkdKYPYykWivuHPXCF1IHgjCIbe8=',
  cfId: undefined,
  attempts: 1,
  totalRetryDelay: 0
},
Here's a simple approach I use (in Deno) for testing, in case you don't want to go the signed-URL route and just want to let the SDK do the heavy lifting for you:
import { config as env } from 'https://deno.land/x/dotenv/mod.ts' // https://github.com/pietvanzoen/deno-dotenv
import { S3Client, ListObjectsV2Command } from 'https://cdn.skypack.dev/@aws-sdk/client-s3' // https://github.com/aws/aws-sdk-js-v3
const { AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY } = env()
// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/modules/credentials.html
const credentials = {
  accessKeyId: AWS_ACCESS_KEY_ID,
  secretAccessKey: AWS_SECRET_ACCESS_KEY,
}
// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/s3clientconfig.html
const config = {
  region: 'ap-southeast-1',
  credentials,
}
// https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/s3client.html
const client = new S3Client(config)
export async function list() {
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/interfaces/listobjectsv2commandinput.html
  const input = {
    Bucket: 'BucketNameHere'
  }
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/command.html
  const cmd = new ListObjectsV2Command(input)
  // https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-s3/classes/listobjectsv2command.html
  return await client.send(cmd)
}
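Calling it then takes advantage of Deno's top-level await (a sketch; the './s3.ts' module path is hypothetical):
// Example usage, assuming the module above is saved as s3.ts:
import { list } from './s3.ts'
const result = await list()
console.log(result.Contents) // the objects in the bucket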

Using AWS Lambda Console to send push using SNS

I tried every possible solution on the internet with no luck.
What I am trying to do is simply use an AWS Lambda function (through the AWS console) to fetch a user's FCM token from, let's say, DynamoDB (not included in the question), use that token to create an endpointArn, and send a push to that specific device.
I tested sending using the SNS console and the push reaches the device successfully, but I failed to get it to the device using a Lambda function, although it returns a success status and a message ID.
Here is the code I used
// Load the AWS SDK for Node.js
var AWS = require('aws-sdk');
// Set region
AWS.config.update({ region: 'us-east-1' });
const sns = new AWS.SNS()
const sampleMessage = {
  "GCM": {
    "notification": {
      "body": "Sample message for Android endpoints",
      "title": "Title Test"
    }
  }
}
exports.handler = async (event) => {
  const snsPayload = JSON.stringify(sampleMessage);
  const response = {
    statusCode: 200,
    body: JSON.stringify('Hello from Lambda!'),
  };
  const params = {
    PlatformApplicationArn: '<Platform Arn>',
    Token: '<FCM Token>'
  };
  try {
    const endpointData = await sns.createPlatformEndpoint(params).promise();
    const paramsMessage = {
      Message: snsPayload,
      TargetArn: endpointData.EndpointArn
    };
    var publishTextPromise = await sns.publish(paramsMessage).promise();
    response.MessageId = publishTextPromise.MessageId;
    response.result = 'Success';
  }
  catch (e) {
    console.log(e.stack)
    response.result = 'Error'
  }
  return response;
};
After some trial and error I figured out the solution to my own question:
1. The GCM part of the payload should be a string, not a JSON object.
2. The message parameters should include an attribute that explicitly sets the MIME type of the payload to JSON.
Taking all that into consideration:
const GCM_data = {
  'notification': {
    'body': 'Hello from lambda function',
    'title': 'Notification Title'
  }
}
const data = {
  "GCM": JSON.stringify(GCM_data)
}
const snsPayload = JSON.stringify(data)
and the params should look like this:
const paramsMessage = {
  Message: snsPayload,
  TargetArn: endpointData.EndpointArn,
  MessageStructure: 'json'
};
and this will work :)
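Putting the pieces together, the corrected flow inside the handler looks like this (a sketch reusing params and snsPayload from the code above):
const endpointData = await sns.createPlatformEndpoint(params).promise();
const paramsMessage = {
  Message: snsPayload,                 // the GCM value inside is a stringified JSON
  TargetArn: endpointData.EndpointArn,
  MessageStructure: 'json'             // tells SNS that Message is a JSON envelope
};
const publishResult = await sns.publish(paramsMessage).promise();
console.log(publishResult.MessageId);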

How can I load a profile from a configuration file and assume a role based on the role ARN?

I'm testing @aws-sdk version 3 and I'm trying to load a profile from my configuration file (which works fine) and then assume a role. This is where I can't figure out what to do. In aws-sdk version 2 I do it like this:
AWS.config.credentials = new AWS.TemporaryCredentials({
  RoleArn: 'arn:aws:iam::XXX:role/XXX',
}, new AWS.SharedIniFileCredentials({ profile: 'myprofile' }));
How do I do the same thing using the new SDK? To just pass the profile name I do this:
const { SSM } = require('@aws-sdk/client-ssm-node');
const ssm = new SSM({
  profile: 'myprofile'
});
I have installed both @aws-sdk/credential-provider-node and @aws-sdk/credential-provider-ini, but with no success in figuring out how to pass the credentials the way I want; the documentation here https://www.npmjs.com/package/@aws-sdk/credential-provider-node doesn't tell me much. So, how do I do that?
It turns out I was looking at the wrong npm package. To assume a role, this is one way to do it:
const { SSM } = require('@aws-sdk/client-ssm-node');
const { STSClient, AssumeRoleCommand } = require('@aws-sdk/client-sts-node');
const sTS = new STSClient({ profile: 'myProfile' });
const params = {
  RoleArn: 'arn:aws:iam::XXX:role/XXX',
  RoleSessionName: 'tempUser',
};
const assumeRoleCommand = new AssumeRoleCommand(params);
let assumedRole = null;
try {
  assumedRole = await sTS.send(assumeRoleCommand);
} catch (error) {
  console.log(error);
}
const ssm = new SSM({
  profile: 'myProfile',
  credentials: {
    accessKeyId: assumedRole.Credentials.AccessKeyId,
    secretAccessKey: assumedRole.Credentials.SecretAccessKey,
    expiration: assumedRole.Credentials.Expiration,
    sessionToken: assumedRole.Credentials.SessionToken,
  },
});
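Note that in current releases of the v3 SDK the same flow can be expressed with a credential provider instead of a manual STS call. A sketch, assuming the stable @aws-sdk/client-ssm and @aws-sdk/credential-providers packages (rather than the preview -node packages above):
const { SSM } = require('@aws-sdk/client-ssm');
const { fromIni, fromTemporaryCredentials } = require('@aws-sdk/credential-providers');
const ssm = new SSM({
  credentials: fromTemporaryCredentials({
    // Resolve base credentials from the named profile, then assume the role.
    masterCredentials: fromIni({ profile: 'myProfile' }),
    params: { RoleArn: 'arn:aws:iam::XXX:role/XXX', RoleSessionName: 'tempUser' },
  }),
});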

How to Get Signed S3 Url in AWS-SDK JS Version 3?

I am following the solution proposed by Trivikr for adding support for the s3.getSignedUrl API, which is not currently available in the newer v3. I am trying to make a signed URL for getting an object from a bucket.
Just for convenience, the code is added below:
const { S3, GetObjectCommand } = require("@aws-sdk/client-s3"); // 1.0.0-gamma.2 version
const { S3RequestPresigner } = require("@aws-sdk/s3-request-presigner"); // 0.1.0-preview.2 version
const { createRequest } = require("@aws-sdk/util-create-request"); // 0.1.0-preview.2 version
const { formatUrl } = require("@aws-sdk/util-format-url"); // 0.1.0-preview.1 version
const fetch = require("node-fetch");
(async () => {
  try {
    const region = "us-east-1";
    const Bucket = `SOME_BUCKET_NAME`;
    const Key = `SOME_KEY_VALUE`;
    const credentials = {
      accessKeyId: ACCESS_KEY_HERE,
      secretAccessKey: SECRET_KEY_HERE,
      sessionToken: SESSION_TOKEN_HERE
    };
    const S3Client = new S3({ region, credentials, signatureVersion: 'v4' });
    console.log('1'); // for quick debugging
    const signer = new S3RequestPresigner({ ...S3Client.config });
    console.log('2');
    const request = await createRequest(
      S3Client,
      new GetObjectCommand({ Key, Bucket })
    );
    console.log('3');
    let signedUrl = formatUrl(await signer.presign(request));
    console.log(signedUrl);
    let response = await fetch(signedUrl);
    console.log("Response", response);
  } catch (e) {
    console.error(e);
  }
})();
I successfully create the S3Client and signer, but on creating the request I get the following error:
clientStack.concat(...).filter is not a function
Am I doing anything wrong?
Please also note that I am using webpack for bundling.
Here is my example in TypeScript:
import { S3Client, GetObjectCommand, S3ClientConfig } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
const s3Configuration: S3ClientConfig = {
  credentials: {
    accessKeyId: '<ACCESS_KEY_ID>',
    secretAccessKey: '<SECRET_ACCESS_KEY>'
  },
  region: '<REGION>',
};
const s3 = new S3Client(s3Configuration);
const command = new GetObjectCommand({ Bucket: '<BUCKET>', Key: '<KEY>' });
const url = await getSignedUrl(s3, command, { expiresIn: 15 * 60 }); // expires in seconds
console.log('Presigned URL: ', url);
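To verify the URL, you can fetch it directly, as the original question does (a sketch; assumes the built-in fetch in Node 18+ or node-fetch):
const response = await fetch(url);
console.log(response.status); // 200 when the object is readable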
RESOLVED
I ended up successfully making the signed URLs by installing the beta versions rather than the (default) preview ones.

Creating lambda function to create thumbnail for AWS s3 bucket is not working

I am learning to use AWS Lambda functions. What I am trying to do is: when I upload an image (JPEG) file to the S3 bucket, the image gets resized. But it is not working. See what I have done below.
I created a folder, then a node_modules folder inside it, then a file called CreateThumbnail.js in the main folder.
This is CreateThumbnail.js:
// dependencies
var async = require('async');
var AWS = require('aws-sdk');
var gm = require('gm')
  .subClass({ imageMagick: true }); // Enable ImageMagick integration.
var util = require('util');

// constants
var MAX_WIDTH = 100;
var MAX_HEIGHT = 100;

// get reference to S3 client
var s3 = new AWS.S3();

exports.handler = function(event, context, callback) {
  // Read options from the event.
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  var srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters.
  var srcKey =
    decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  var dstBucket = srcBucket + "resized";
  var dstKey = "resized-" + srcKey;

  // Sanity check: validate that source and destination are different buckets.
  if (srcBucket == dstBucket) {
    callback("Source and destination buckets are the same.");
    return;
  }

  // Infer the image type.
  var typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    callback("Could not determine the image type.");
    return;
  }
  var imageType = typeMatch[1];
  if (imageType != "jpg" && imageType != "png") {
    callback(`Unsupported image type: ${imageType}`);
    return;
  }

  // Download the image from S3, transform, and upload to a different S3 bucket.
  async.waterfall([
    function download(next) {
      // Download the image from S3 into a buffer.
      s3.getObject({
        Bucket: srcBucket,
        Key: srcKey
      },
      next);
    },
    function transform(response, next) {
      gm(response.Body).size(function(err, size) {
        // Infer the scaling factor to avoid stretching the image unnaturally.
        var scalingFactor = Math.min(
          MAX_WIDTH / size.width,
          MAX_HEIGHT / size.height
        );
        var width = scalingFactor * size.width;
        var height = scalingFactor * size.height;
        // Transform the image buffer in memory.
        this.resize(width, height)
          .toBuffer(imageType, function(err, buffer) {
            if (err) {
              next(err);
            } else {
              next(null, response.ContentType, buffer);
            }
          });
      });
    },
    function upload(contentType, data, next) {
      // Stream the transformed image to a different S3 bucket.
      s3.putObject({
        Bucket: dstBucket,
        Key: dstKey,
        Body: data,
        ContentType: contentType
      },
      next);
    }
  ], function (err) {
    if (err) {
      console.error(
        'Unable to resize ' + srcBucket + '/' + srcKey +
        ' and upload to ' + dstBucket + '/' + dstKey +
        ' due to an error: ' + err
      );
    } else {
      console.log(
        'Successfully resized ' + srcBucket + '/' + srcKey +
        ' and uploaded to ' + dstBucket + '/' + dstKey
      );
    }
    callback(null, "message");
  });
};
Then I zipped the folder, created a function in the AWS Lambda console, and uploaded the zip file from the UI.
Then I added the S3 trigger.
I also created the role with the correct permissions and policies.
But when I upload a JPG file to the S3 bucket, the image is neither resized nor is a thumbnail created. What could be wrong here?
This is the function policy:
Lambda functions send their debug information to Amazon CloudWatch Logs. Examine the log file to determine what went wrong.
If there is no log, then either the Lambda function never executed, or the Lambda function was not given sufficient permissions to write to CloudWatch Logs.
See: Accessing Amazon CloudWatch Logs for AWS Lambda
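For example, with the AWS CLI v2 you can tail the function's log group directly (a sketch, assuming the function is named CreateThumbnail):
aws logs tail /aws/lambda/CreateThumbnail --follow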