AWS Upload to S3 through Lambda - amazon-web-services

I'm trying to upload pictures to S3 from Lambda. I get a return code of 200 when uploading the picture from my phone, but the image never appears in the bucket. Is it something to do with a bucket policy? The Lambda function:
const AWS = require('aws-sdk');
AWS.config.update({
  region: 'us-west-2'
})
const s3 = new AWS.S3();

exports.handler = async (event, context, callback) => {
  AWS.config.update({region: 'us-west-2'});
  // var buf = Buffer.from(event.picture.imageBinary.replace(/^data:image\/\w+;base64,/, ""),'base64')
  let encodedImage = JSON.parse(event.picture);
  let decodedImage = Buffer.from(encodedImage, 'base64');
  var filePath = "avatars/" + event.userid + ".jpg"
  var params = {
    Body: decodedImage,
    Bucket: 'testpictures-1',
    Key: filePath,
    ContentEncoding: 'base64',
    ContentType: 'image/jpeg'
  };
  s3.putObject(params, function(err, data) {
    if (err) { console.log(err, err.stack); } // an error occurred
    else { console.log(data); }               // successful response
  });
};

Because you are trying to invoke the Amazon S3 service from a Lambda function, you must ensure that the IAM role associated with the Lambda function has the correct S3 policies. If the IAM role does not have the policies for a given AWS service, then you cannot successfully invoke that service from a Lambda function. Here is an AWS tutorial (implemented in Java) that discusses this point:
https://github.com/awsdocs/aws-doc-sdk-examples/tree/master/javav2/usecases/creating_scheduled_events
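For example, a minimal identity-based policy on the Lambda's execution role could look like the following sketch (the bucket name is taken from the question; adjust it to your own):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::testpictures-1/*"
    }
  ]
}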

Related

How to use an S3 trigger to invoke a Lambda function for a specific folder in an S3 bucket

I created a Lambda function using the blueprint s3-get-object-python. I want the function to be invoked only when I upload a file into a certain folder of the S3 bucket I created. I tried adding a prefix of "foldername/" while creating the trigger, but it does not work.
Below is the code my Lambda function contains. I know the code isn't the issue here.
console.log('Loading function');

const aws = require('aws-sdk');
const s3 = new aws.S3({ apiVersion: '2006-03-01' });

exports.handler = async (event, context) => {
  //console.log('Received event:', JSON.stringify(event, null, 2));

  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };
  try {
    const { ContentType } = await s3.getObject(params).promise();
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};
You are probably missing the proper resource-based policy for your Lambda. That is something you configure in your Lambda's configuration.
It is a configuration granting the S3 service permission to invoke your Lambda function. Without this permission, the S3 service won't be able to invoke your Lambda.
If you use the AWS CLI you could do it like this:
aws lambda add-permission \
  --function-name <your-function-name> \
  --action lambda:InvokeFunction \
  --statement-id allow-s3-invoke \
  --principal s3.amazonaws.com \
  --source-arn <your-bucket-arn> \
  --source-account <your-account-id>
You can find the relevant documentation here:
https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html
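As for the "foldername/" prefix mentioned in the question: that filter lives in the bucket's notification configuration rather than in the Lambda permission. A sketch of setting it via the CLI (the bucket name and function ARN are placeholders):

aws s3api put-bucket-notification-configuration \
  --bucket <your-bucket-name> \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "<your-function-arn>",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "prefix", "Value": "foldername/" }]
        }
      }
    }]
  }'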

Uploading a file to an S3 bucket without using the AWS console, using Node.js

I am new to AWS technology. I need to do the above assignment, so I want to know the procedure to complete it, and what prerequisite software and tools I will need.
Please suggest something simple. Thanks in advance, and happy coding.
You have to configure your AWS credentials before using AWS with the aws-sdk npm package, using the method below:
import AWS from "aws-sdk";

const s3 = new AWS.S3({
  accessKeyId: YOUR_ACCESS_KEY_ID_OF_AMAZON,
  secretAccessKey: YOUR_AMAZON_SECRET_ACCESS_KEY,
  signatureVersion: "v4",
  region: YOUR_AMAZON_REGION // country
});

export { s3 };
Then call s3 and do the upload request using the code below:
const uploadReq: any = {
  Bucket: "YOUR_BUCKET",
  Key: "FILE_NAME",
  Body: "FILE_STREAM",
  ACL: "public-read", // accessible from a remote location
};

await new Promise((resolve, reject) => {
  s3.upload(uploadReq).send(async (err: any, data: any) => {
    if (err) {
      console.log("err", err);
      reject(err);
    } else {
      // database call
      resolve("STATUS");
    }
  });
});
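For the FILE_STREAM placeholder above, a common approach is to read a local file and pass the stream as the Body. A small sketch (the file name and the "./s3" import path are assumptions, not from the answer):

import fs from "fs";
import { s3 } from "./s3"; // the client exported above

const uploadLocalFile = async () => {
  const uploadReq = {
    Bucket: "YOUR_BUCKET",
    Key: "photo.jpg",                          // object key inside the bucket
    Body: fs.createReadStream("./photo.jpg"),  // stream of the local file
    ACL: "public-read",
  };
  const data = await s3.upload(uploadReq).promise();
  console.log("Uploaded to", data.Location);
};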

Get result from Amazon Transcribe directly (serverless)

I use serverless Lambda services to transcribe speech to text with Amazon Transcribe. My current scripts are able to transcribe a file from S3 and store the result as a JSON file, also in S3.
Is there a possibility to get the result directly? I want to store it in a database (PostgreSQL in AWS RDS).
Thank you for your hints.
serverless.yml
...
provider:
  name: aws
  runtime: nodejs10.x
  region: eu-central-1
  memorySize: 128
  timeout: 30
  environment:
    S3_AUDIO_BUCKET: ${self:service}-${opt:stage, self:provider.stage}-records
    S3_TRANSCRIPTION_BUCKET: ${self:service}-${opt:stage, self:provider.stage}-transcriptions
    LANGUAGE_CODE: de-DE
  iamRoleStatements:
    - Effect: Allow
      Action:
        - s3:PutObject
        - s3:GetObject
      Resource:
        - 'arn:aws:s3:::${self:provider.environment.S3_AUDIO_BUCKET}/*'
        - 'arn:aws:s3:::${self:provider.environment.S3_TRANSCRIPTION_BUCKET}/*'
    - Effect: Allow
      Action:
        - transcribe:StartTranscriptionJob
      Resource: '*'

functions:
  transcribe:
    handler: handler.transcribe
    events:
      - s3:
          bucket: ${self:provider.environment.S3_AUDIO_BUCKET}
          event: s3:ObjectCreated:*
  createTextinput:
    handler: handler.createTextinput
    events:
      - http:
          path: textinputs
          method: post
          cors: true
...
resources:
  Resources:
    S3TranscriptionBucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: ${self:provider.environment.S3_TRANSCRIPTION_BUCKET}
...
handler.js
const db = require('./db_connect');
const awsSdk = require('aws-sdk');
const transcribeService = new awsSdk.TranscribeService();

module.exports.transcribe = (event, context, callback) => {
  const records = event.Records;
  const transcribingPromises = records.map((record) => {
    const recordUrl = [
      'https://s3.amazonaws.com',
      process.env.S3_AUDIO_BUCKET,
      record.s3.object.key,
    ].join('/');

    // create random filename to avoid conflicts in amazon transcribe jobs
    function makeid(length) {
      var result = '';
      var characters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
      var charactersLength = characters.length;
      for (var i = 0; i < length; i++) {
        result += characters.charAt(Math.floor(Math.random() * charactersLength));
      }
      return result;
    }

    const TranscriptionJobName = makeid(7);
    return transcribeService.startTranscriptionJob({
      LanguageCode: process.env.LANGUAGE_CODE,
      Media: { MediaFileUri: recordUrl },
      MediaFormat: 'wav',
      TranscriptionJobName,
      //MediaSampleRateHertz: 8000, // normally 8000 if you are using wav file
      OutputBucketName: process.env.S3_TRANSCRIPTION_BUCKET,
    }).promise();
  });

  Promise.all(transcribingPromises)
    .then(() => {
      callback(null, { message: 'Start transcription job successfully' });
    })
    .catch(err => callback(err, { message: 'Error start transcription job' }));
};

module.exports.createTextinput = (event, context, callback) => {
  context.callbackWaitsForEmptyEventLoop = false;
  const data = JSON.parse(event.body);
  db.insert('textinputs', data)
    .then(res => {
      callback(null, {
        statusCode: 200,
        body: "Textinput Created! id: " + res
      })
    })
    .catch(e => {
      callback(null, {
        statusCode: e.statusCode || 500,
        body: "Could not create a Textinput " + e
      })
    })
};
I think your best option is to trigger a Lambda from the S3 event when a transcription is stored, and then post the data to your database. As Dunedan mentioned, you can't go directly from Transcribe to a DB.
You can add the event to a Lambda via Serverless like so:
storeTranscriptonInDB:
  handler: index.storeTransciptInDB
  events:
    - s3:
        bucket: ${self:provider.environment.S3_TRANSCRIPTION_BUCKET}
        rules:
          - suffix: .json
The s3 key for the transcript file will be event.Records[#].s3.object.key
I would loop through the records to be thorough, and for each do something like this:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const storeTransciptInDB = async (event, context, callback) => {
  const records = event.Records;
  for (const record of records) {
    const key = record.s3.object.key;
    const params = {
      Bucket: record.s3.bucket.name,
      Key: key
    };
    // fetch the transcription JSON that Transcribe wrote to the bucket
    const transcriptFile = await s3.getObject(params).promise();
    const transcriptObject = JSON.parse(transcriptFile.Body.toString("utf-8"));
    const transcriptResults = transcriptObject.results.transcripts;
    let transcript = "";
    transcriptResults.forEach(result => (transcript += result.transcript + " "));
    // at this point you can post the transcript variable to your database
  }
};
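If you reuse the db_connect helper from the question's handler.js, that last step could look roughly like this (assuming db.insert takes a table name and an object, as in createTextinput, and that a "transcripts" table exists; both are assumptions):

const db = require('./db_connect'); // same helper as in the question's handler.js

// inside the loop, after building the transcript string:
await db.insert('transcripts', {   // 'transcripts' table name is hypothetical
  s3Key: key,                      // key of the transcription JSON in S3
  text: transcript,                // concatenated transcript text
});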
Amazon Transcribe currently only supports storing transcriptions in S3, as explained in the API definition for StartTranscriptionJob. There is one special case though: If you don't want to manage your own S3 bucket for transcriptions, you can just leave out the OutputBucketName and the transcription will be stored in an AWS-managed S3 bucket. In that case you'd get a pre-signed URL allowing you to download the transcription.
As transcribing happens asynchronously, I suggest you create a second AWS Lambda function, triggered by a CloudWatch Event which gets emitted when the state of your transcription changes (as explained in Using Amazon CloudWatch Events with Amazon Transcribe) or by an S3 notification (Using AWS Lambda with Amazon S3). This AWS Lambda function can then fetch the finished transcription from S3 and store its contents in PostgreSQL.
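With the Serverless Framework, wiring up such a CloudWatch-Events-triggered function could look roughly like this sketch (the function and handler names are placeholders, not from the question):

functions:
  storeTranscriptInDb:
    handler: handler.storeTranscriptInDb
    events:
      - cloudwatchEvent:
          event:
            source:
              - 'aws.transcribe'
            detail-type:
              - 'Transcribe Job State Change'
            detail:
              TranscriptionJobStatus:
                - COMPLETED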

Can't upload file to S3 with Postman using pre-signed URL. Error: SignatureDoesNotMatch

How do I upload a file to S3 with a signed URL?
I tried the following:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ accessKeyId: "", secretAccessKey: "" });

const url = s3.getSignedUrl("putObject", {
  Bucket: "SomeBucketHere",
  Key: "SomeNameHere",
  ContentType: "binary/octet-stream",
  Expires: 600
});
But when I try uploading with Postman using the following steps, I get the SignatureDoesNotMatch error.
PUT method with URL from the above code
Body: binary (radio button), choose file, select a file to upload
Hit Send
I can confirm that the IAM permissions are not the problem here. I have complete access to the Bucket.
What's wrong and how do I test my signed URL?
This issue once caused me a lot of pain to get around.
All I had to do was add a header to the Postman request.
Header: Content-Type = binary/octet-stream
Once I changed this, the file uploads successfully.
Hope this saves someone a lot of trouble down the road.
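In other words, the Content-Type header sent by the client has to match the ContentType that was used when signing the URL. With curl, the same request would look something like this (the file name is just an example):

curl -X PUT \
  -H "Content-Type: binary/octet-stream" \
  --data-binary "@myfile.bin" \
  "<the signed URL returned by getSignedUrl>"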
Not sure if it's of any help anymore, but I believe you need to pass in the signature version.
In Python you have something like:
import boto3
from botocore.client import Config
...
s3_client = boto3.client('s3', config=Config(signature_version='s3v4'))
url = s3_client.generate_presigned_url(
    ClientMethod='put_object',
    Params={
        'Bucket': bucket,
        'Key': key,
    },
    ExpiresIn=60
)
print(url)
So the equivalent in JavaScript would be:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({
  signatureVersion: 'v4',
});

exports.handler = (event, context, callback) => {
  const url = s3.getSignedUrl('putObject', {
    Bucket: 'YOUR-S3-BUCKET',
    Key: 'mykey',
    Expires: 10,
  });
  callback(null, url);
};
Sources:
Presigned URL for S3 Bucket Expires Before Specified Expiration Time
Using pre-signed URLs to upload a file to a private S3 bucket

Create a Lambda function which will parse the emails uploaded to S3 by an SES receipt rule

I would like to create a Lambda function which will parse the emails that are uploaded to an S3 bucket through an SES receipt rule.
Uploading to the S3 bucket through the SES receipt rule works fine. It's tested already and confirmed that it uploads the file correctly.
My AWS Lambda function:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();
var bucketName = 'bucket_name/folder/destination';

exports.handler = function(event, context, callback) {
  console.log('Process email');
  var sesNotification = event.Records[0].ses;
  console.log("SES Notification:\n", JSON.stringify(sesNotification, null, 2));
  // Retrieve the email from your bucket
  s3.getObject({
    Bucket: bucketName,
    Key: sesNotification.mail.messageId
  }, function(err, data) {
    if (err) {
      console.log(err, err.stack);
      callback(err);
    } else {
      console.log("Raw email:\n" + data.Body);
      // Custom email processing goes here
      callback(null, null);
    }
  });
};
When there is a file upload it triggers the Lambda, but I get a SignatureDoesNotMatch error:
{
  "errorMessage": "The request signature we calculated does not match the signature you provided. Check your key and signing method.",
  "errorType": "SignatureDoesNotMatch",
  "stackTrace": [
    "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/services/s3.js:524:35)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:615:14)",
    "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
    "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
    "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:617:12)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
  ]
}
If anyone can help me approach this problem, that would be great!
Thanks
Okay, I solved it. The bucketName should contain only the bucket's name, but the key can contain the rest of the path to your file's exact "directory".
So basically, bucketName should not contain subfolders, but the key should.
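A minimal sketch of that change, reusing the names from the question:

var bucketName = 'bucket_name';        // bucket name only, no subfolders
var keyPrefix = 'folder/destination/'; // subfolders move into the key

s3.getObject({
  Bucket: bucketName,
  Key: keyPrefix + sesNotification.mail.messageId
}, function(err, data) {
  // ...
});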
Thanks
For reference: AWS Lambda S3 GET/POST - SignatureDoesNotMatch error