AWS S3 NoSuchKey error when retrieving a file that was just uploaded - amazon-web-services

I uploaded a file to my S3 bucket and tried to read it immediately after the upload. Most of the time I get "err NoSuchKey: The specified key does not exist". I checked the bucket using the console and the file actually exists.
After refreshing the page, the file can be read.
The AWS region is US East (N. Virginia).
The file is uploaded with a private ACL.
export function uploadFile(absolutePath: string, fileBuffer: Buffer, callback: (err, result) => void) {
  try {
    let uploadParams: awsSdk.S3.PutObjectRequest = {
      Bucket: cfg.aws[process.env.NODE_ENV].bucket,
      Key: absolutePath,
      Body: fileBuffer,
      ACL: 'private',
      CacheControl: 'public, max-age=2628000'
    }
    s3.upload(uploadParams, function (err, result) {
      if (err) {
        Util.logError('Aws Upload File', err)
      }
      return callback(err, result)
    })
  } catch (err) {
    Util.logError('Aws Upload File', err)
    return callback(err, null)
  }
}

export function obtainObjectOutput(absolutePath: string, callback: (err, result: awsSdk.S3.GetObjectOutput) => void) {
  let getParams: awsSdk.S3.GetObjectRequest = {
    Bucket: cfg.aws[process.env.NODE_ENV].bucket,
    Key: absolutePath
  }
  s3.getObject(getParams, (error, result) => {
    (error) ? callback(error, null) : callback(null, result)
  })
}

The number one reason that an S3 GetObject fails right after an upload is that the GetObject request actually ran before the upload completed. This is easy to do by accident in async JavaScript: make sure the read is only issued from (or after) the upload callback.
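For illustration, a minimal sketch that only issues the read once the upload callback has fired (it reuses the uploadFile and obtainObjectOutput functions from the question; the key and buffer names are placeholders):

// Chain the read off the upload callback so the GET can never run before the PUT completes.
uploadFile('images/photo.jpg', fileBuffer, (uploadErr, uploadResult) => {
  if (uploadErr) {
    return console.error('upload failed', uploadErr)
  }
  obtainObjectOutput('images/photo.jpg', (getErr, getResult) => {
    if (getErr) {
      return console.error('read failed', getErr)
    }
    console.log('object size in bytes:', getResult.ContentLength)
  })
})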

Related

AWS S3 Readstream returns no data

I have an application in which files are uploaded to S3, and an event triggers a Lambda function to process them.
When a file is uploaded I can see the function execution in the CloudWatch logs, but no data is returned, no error is thrown, and the on('end') handler is never called.
The files being processed are .csv, and I'm able to open them and check the contents manually.
Any ideas on what may be happening?
This is my code:
let es = require('event-stream');

let readStream = s3.getObject({
  Bucket: event.Records[0].s3.bucket.name,
  Key: event.Records[0].s3.object.key
}).createReadStream();

readStream
  .pipe(es.split())
  .pipe(es.mapSync(function (line) {
    console.log(line);
    processLine(line);
  }))
  .on('end', async () => {
    console.log('ON END');
    callback(null, 'OK');
  })
  .on('error', (err) => {
    console.error(JSON.stringify(err));
    callback(JSON.stringify(err));
  })
When a Node.js Lambda handler returns before its asynchronous work has finished, that pending work never gets to run.
Make your handler async and wrap the stream handling in a Promise that you await, resolving or rejecting it when the stream finishes:
await new Promise((resolve, reject) => {
  readStream
    .pipe(es.split())
    .pipe(es.mapSync(function (line) {
      console.log(line);
      processLine(line);
    }))
    .on('end', () => {
      console.log('ON END');
      resolve('OK'); // resolve on success as well, otherwise the handler returns before the stream is consumed
    })
    .on('error', (err) => {
      console.error(JSON.stringify(err));
      reject(err);
    });
});
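Put together, a rough sketch of the full handler (assuming s3, es and processLine are set up as in the question; this is the shape the answer implies, not code taken from it):

// The async handler returns only after the awaited promise settles,
// so the stream is fully consumed before Lambda freezes the environment.
exports.handler = async (event) => {
  const readStream = s3.getObject({
    Bucket: event.Records[0].s3.bucket.name,
    Key: event.Records[0].s3.object.key
  }).createReadStream();

  await new Promise((resolve, reject) => {
    readStream
      .pipe(es.split())
      .pipe(es.mapSync((line) => processLine(line)))
      .on('end', resolve)
      .on('error', reject);
  });

  return 'OK';
};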

How to save multipart form data to a file in AWS Lambda

I am writing a serverless API using Node.js and TypeScript. I want to send an .xlsx file from the client side and save it to an S3 bucket.
I tried using the npm package busboy, but the file saved in the S3 bucket cannot be read (it is corrupted or not fully written). It looks like I'm not writing the file correctly.
Here's my code:
import * as busboy from 'busboy';
import AWS = require('aws-sdk');
import { S3 } from 'aws-sdk';

export const saveExcel: Handler = (event: APIGatewayEvent, context: Context, cb: Callback) => {
  const contentType = event.headers['Content-Type'] || event.headers['content-type'];
  const bb: busboy.Busboy = new busboy({ headers: { 'content-type': contentType } });
  bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
    file
      .on('data', data => {
        const params: S3.PutObjectRequest = {
          Bucket: 'bucket-name',
          Key: filename,
          Body: data
        };
        s3.upload(params, function (err, data) {
          if (err) {
            console.log(err);
          } else {
            console.log(data);
          }
        })
      })
      .on('end', () => {
        console.log("done");
      });
  });
  bb.end(event.body);
}
What am I doing wrong? Or do I have to try any other library?
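One likely culprit (my reading of the posted code, not something confirmed in the question): s3.upload is called once per 'data' chunk, always with the same Key, so each chunk overwrites the previous one and only the last chunk survives in the bucket. A minimal sketch that collects the chunks and uploads a single buffer when the file stream ends:

bb.on('file', function (fieldname, file, filename, encoding, mimetype) {
  const chunks: Buffer[] = [];
  file
    .on('data', (data) => {
      chunks.push(data); // buffer every chunk instead of uploading each one separately
    })
    .on('end', () => {
      const params: S3.PutObjectRequest = {
        Bucket: 'bucket-name',
        Key: filename,
        Body: Buffer.concat(chunks), // one complete buffer, one upload
        ContentType: mimetype
      };
      s3.upload(params, (err, result) => {
        if (err) {
          console.log(err);
        } else {
          console.log(result);
        }
      });
    });
});

Also, if the request arrives through API Gateway with isBase64Encoded set, event.body may need to be decoded with Buffer.from(event.body, 'base64') before being passed to bb.end.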

How to abort an upload in the AWS S3 JavaScript SDK?

I'm new to AWS, and I need to cancel an upload using the JS SDK. Here is a code sample where I have tried to upload an image and then cancel the upload:
var test = s3.upload({
  Key: fileKey,
  Body: myfile,
  ACL: 'private'
},
function (err, data) {
  if (err) {
  } else {
  }
});

$("#cancel_Button").on("click", function () {
  s3.abortMultipartUpload(
    params = {
      Bucket: data.bucket,
      Key: data.key,
      UploadId: undefined
    },
    function (err, data) {
      if (err)
        console.log(err, err.stack);
      else
        console.log(data);
    }
  )
});
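For what it's worth, the managed upload object returned by s3.upload() in the AWS SDK for JavaScript v2 has an abort() method, which avoids having to track an UploadId yourself. A rough sketch based on the code above:

var upload = s3.upload({
  Key: fileKey,
  Body: myfile,
  ACL: 'private'
},
function (err, data) {
  if (err) {
    // an aborted upload is reported here as an error
    console.log(err);
  } else {
    console.log(data);
  }
});

$("#cancel_Button").on("click", function () {
  upload.abort(); // aborts the managed upload, including any parts already in flight
});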

$cordovaFileTransfer is not working when uploading a file to an S3 bucket

I have tried to upload an mp4 file to an AWS S3 bucket using the $cordovaFileTransfer plugin.
This is my code:
bucket.getSignedUrl('putObject', {
  Key: 'uploads/ghdfdgjfjhs.mp4'
}, function(err, url) {
  console.log('The URL is', url);
  document.addEventListener('deviceready', function() {
    $cordovaFileTransfer.upload(url, $scope.clip, {})
      .then(function(result) {
        console.log(result);
      }, function(err) {
        console.log(err);
      }, function(progress) {
      });
  }, false);
});
The signed URL is returned successfully, but the upload fails with the following error:
ssl=0xaf974000: I/O error during system call, Connection reset by peer
How can I solve this error?

Pushing AWS Lambda data to Kinesis Stream

Is there a way to push data from a Lambda function to a Kinesis stream? I have searched the internet but have not found any examples related to it.
Thanks.
Yes, you can send information from Lambda to a Kinesis stream, and it is very simple to do. Make sure you are running Lambda with the right permissions.
Create a file called kinesis.js. This file will provide a 'save' function that receives a payload and sends it to the Kinesis stream. We want to be able to include this 'save' function anywhere we want to send data to the stream. Code:
const AWS = require('aws-sdk');
const kinesisConstant = require('./kinesisConstants'); //Keep it consistent
const kinesis = new AWS.Kinesis({
  apiVersion: kinesisConstant.API_VERSION, //optional
  //accessKeyId: '<you-can-use-this-to-run-it-locally>', //optional
  //secretAccessKey: '<you-can-use-this-to-run-it-locally>', //optional
  region: kinesisConstant.REGION
});

const savePayload = (payload) => {
  //We can only save strings into the streams
  if (typeof payload !== kinesisConstant.PAYLOAD_TYPE) {
    try {
      payload = JSON.stringify(payload);
    } catch (e) {
      console.log(e);
    }
  }

  let params = {
    Data: payload,
    PartitionKey: kinesisConstant.PARTITION_KEY,
    StreamName: kinesisConstant.STREAM_NAME
  };

  kinesis.putRecord(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log('Record added:', data);
  });
};

exports.save = (payload) => {
  const params = {
    StreamName: kinesisConstant.STREAM_NAME,
  };

  kinesis.describeStream(params, function(err, data) {
    if (err) console.log(err, err.stack);
    else {
      //Make sure stream is able to take new writes (ACTIVE or UPDATING are good)
      if (data.StreamDescription.StreamStatus === kinesisConstant.STATE.ACTIVE
        || data.StreamDescription.StreamStatus === kinesisConstant.STATE.UPDATING) {
        savePayload(payload);
      } else {
        console.log(`Kinesis stream ${kinesisConstant.STREAM_NAME} is ${data.StreamDescription.StreamStatus}.`);
        console.log(`Record Lost`, JSON.parse(payload));
      }
    }
  });
};
Create a kinesisConstants.js file (matching the require above) to keep it consistent :)
module.exports = {
  STATE: {
    ACTIVE: 'ACTIVE',
    UPDATING: 'UPDATING',
    CREATING: 'CREATING',
    DELETING: 'DELETING'
  },
  STREAM_NAME: '<your-stream-name>',
  PARTITION_KEY: '<string-value-if-one-shard-anything-will-do>',
  PAYLOAD_TYPE: 'string', //typeof returns lowercase type names, so compare against 'string'
  REGION: '<the-region-where-you-have-lambda-and-kinesis>',
  API_VERSION: '2013-12-02'
}
Your handler file: the 'done' function sends a response back to whoever invoked the Lambda, but 'kinesis.save(event)' does all the work.
const kinesis = require('./kinesis');

exports.handler = (event, context, callback) => {
  console.log('LOADING handler');

  const done = (err, res) => callback(null, {
    statusCode: err ? '400' : '200',
    body: err || res,
    headers: {
      'Content-Type': 'application/json',
    },
  });

  kinesis.save(event); // here we send it to the stream
  done(null, event);
}
This works exactly the same way as it would on your own machine.
Here's an example in Node.js:
let aws = require('aws-sdk');
let kinesis = new aws.Kinesis();

// data that you'd like to send
let data_object = { "some": "properties" };
let data = JSON.stringify(data_object);

// push data to kinesis
const params = {
  Data: data,
  PartitionKey: "1",
  StreamName: "stream name"
}

kinesis.putRecord(params, (err, data) => {
  if (err) console.error(err);
  else console.log("data sent");
});
Please note, this piece of code will not work as-is, because the Lambda has no permissions on your stream.
When accessing AWS resources from Lambda, it is better to use IAM roles.
When configuring a new Lambda, you can choose an existing role or create a new one.
Go to IAM, then Roles, and pick the role name you assigned to the Lambda function.
Add the relevant permissions (putRecord, putRecords); a minimal policy is sketched below.
Then, test the Lambda.
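For illustration only (the stream ARN is a placeholder), the minimal policy mentioned above might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": "<YOUR_STREAM_ARN>"
    }
  ]
}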
Yes, this can be done. I was trying to accomplish the same thing and was able to do so in Lambda using the Node.js 4.3 runtime; it also works in version 6.10.
Here is the code:
Declare the following at the top of your Lambda function:
var AWS = require("aws-sdk");
var kinesis = new AWS.Kinesis();

function writeKinesis(rawdata) {
  var data = JSON.stringify(rawdata);
  var params = { Data: data, PartitionKey: "<PARTITION_KEY>", StreamName: "<STREAM_NAME>" };
  kinesis.putRecord(params, (err, data) => {
    if (err) console.error(err);
    else console.log("data sent");
  });
}
Now, in the exports.handler, call the function:
writeKinesis(<YOUR_DATA>);
A few things to note... for Kinesis to ingest data, it must be encoded (the Data parameter takes a string or Buffer). In the example below, I have a function that takes logs from CloudWatch and sends them over to a Kinesis stream.
Note that I'm passing the contents of buffer.toString('utf8') into the writeKinesis function:
var zlib = require('zlib'); // also declared at the top, needed for gunzip

exports.handler = function(input, context) {
  ...
  var zippedInput = new Buffer(input.awslogs.data, 'base64');
  zlib.gunzip(zippedInput, function(error, buffer) {
    ...
    writeKinesis(buffer.toString('utf8'));
    ...
  });
  ...
};
Finally, in IAM, configure the appropriate permissions. Your Lambda function has to run within the context of an IAM role that includes the permissions below. In my case, I just modified the default lambda_elasticsearch_execution role to include a policy called "lambda_kinesis_execution" with the following code:
"Effect": "Allow",
"Action": [
"kinesis:*"
],
"Resource": [
"<YOUR_STREAM_ARN>"
]