AWS S3 Bucket Upload using CollectionFS and cfs-s3 meteor package

I am using Meteor.js with an Amazon S3 bucket for uploading and storing photos. I am using the meteorite packages collectionFS and cfs-s3. I have set up my S3 connection correctly and the images collection is working fine.
Client side event handler:
'click .submit': function(evt, templ) {
  var user = Meteor.user();
  var photoFile = $('#photoInput').get(0).files[0];
  if (photoFile) {
    var readPhoto = new FileReader();
    readPhoto.onload = function(event) {
      photodata = event.target.result;
      console.log("calling method");
      Meteor.call('uploadPhoto', photodata, user);
    };
    readPhoto.readAsDataURL(photoFile);
  }
}
And my server side method:
'uploadPhoto': function uploadPhoto(photodata, user) {
  var tag = Random.id([10] + "jpg");
  var photoObj = new FS.File({name: tag});
  photoObj.attachData(photodata);
  console.log("s3 method called");
  Images.insert(photoObj, function (err, fileObj) {
    if (err) {
      console.log(err, err.stack);
    } else {
      console.log(fileObj._id);
    }
  });
}
The selected file is a .jpg image, but on upload the server method throws this error:
Exception while invoking method 'uploadPhoto' Error: DataMan constructor received data that it doesn't support
No matter whether I pass the image file directly, attach it as data, or use the FileReader to read it as text/binary/string, I still get that error. Please advise.

OK, a few thoughts. I did some work with CollectionFS a few months ago, so check everything against the docs, because my examples may not be 100% correct.
Credentials should be set via environment variables, so your key and secret are available on the server only.
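For illustration, a server-side store configured that way might look roughly like this (a sketch of mine, assuming a store named profileImagesStore; region and bucket are placeholder values, not from the original answer):

// server only -- a sketch, not the answer's exact setup
var profileImagesStore = new FS.Store.S3("profileImages", {
  region: "eu-west-1",                                // placeholder region
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,         // read from environment variables,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY, // never shipped to the client
  bucket: "my-upload-bucket"                          // placeholder bucket name
});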
OK, first here is some example code which works for me. Check yours for differences.
Template helper:
'dropped #dropzone': function(event, template) {
  addImagePreview(event);
}
The addImagePreview function:
function addImagePreview(event) {
  // Go through each file
  FS.Utility.eachFile(event, function(file) {
    // Some validation checks
    var reader = new FileReader();
    reader.onload = (function(theFile) {
      return function(e) {
        // e.target.result is the data URL read from the file
        var fsFile = new FS.File(e.target.result);
        // set metadata that is validated in the collection,
        // so only the owning user can update/remove fsFile
        fsFile.metadata = {owner: Meteor.userId()};
        PostImages.insert(fsFile, function (err, fileObj) {
          if (err) {
            console.log(err);
          }
        });
      };
    })(file);
    // Read in the image file as a data URL.
    reader.readAsDataURL(file);
  });
}
OK, your next point is validation. Validation can be done with allow/deny rules and with a filter on the FS.Collection. This way you can do all your validation AND still insert from the client.
Example:
PostImages = new FS.Collection('profileImages', {
  stores: [profileImagesStore],
  filter: {
    maxSize: 3145728,
    allow: {
      contentTypes: ['image/*'],
      extensions: ['png', 'PNG', 'jpg', 'JPG', 'jpeg', 'JPEG']
    }
  },
  onInvalid: function(message) {
    console.log(message);
  }
});
PostImages.allow({
  insert: function(userId, doc) {
    return (userId && doc.metadata.owner === userId);
  },
  update: function(userId, doc, fieldNames, modifier) {
    return (userId === doc.metadata.owner);
  },
  remove: function(userId, doc) {
    return false;
  },
  download: function(userId) {
    return true;
  },
  fetch: []
});
Here you will find another example.
Another possible source of error is your AWS configuration. Have you set up everything as described in the cfs-s3 documentation?
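For reference (my addition, not from the original answer), the IAM user whose keys you configure typically needs a policy along these lines; the bucket name is a placeholder:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-upload-bucket/*"
    }
  ]
}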
Based on a related post, it seems that this error occurs when FS.File() is not constructed correctly, so maybe that is the first place to look.
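As a rough sketch addressing the original error (my reading, not part of this answer): if the client sends the result of readAsDataURL, constructing the FS.File on the server like this, stating the content type explicitly, should give DataMan data it understands:

// Server method -- sketch; photodata is assumed to be a data URI string
// produced by readPhoto.readAsDataURL(photoFile) on the client.
'uploadPhoto': function (photodata, user) {
  var photoObj = new FS.File();
  photoObj.attachData(photodata, {type: 'image/jpeg'}); // explicit type is an assumption
  Images.insert(photoObj, function (err, fileObj) {
    if (err) console.log(err, err.stack);
  });
}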
That is a lot to read, so I hope this helps you :)

Related

How to update an item after being newly created in AWS DynamoDB and Amplify

I am trying to update an item in AWS DynamoDB using AWS Amplify on top of Next.js.
My scenario is simple.
On page load, if a user exists and has not visited the page before, a new object is created with set values using SWR.
const fetchUserSite = async (owner, code) => {
  try {
    // Create site object if no site exists
    if (userData == null) {
      const siteInfo = {
        id: uuidv4(),
        code: parkCode,
        owner: user?.username,
        bookmarked: false,
        visited: false,
      }
      await API.graphql({
        query: createSite,
        variables: {input: siteInfo},
        authMode: 'AMAZON_COGNITO_USER_POOLS',
      })
      console.log(`${code} added for the first time`)
    }
    return userData || null
  } catch (err) {
    console.log('Site not added by user', data, err)
  }
}
// Only call the fetchUserSite method if `user` exists
const {data} = useSWR(user ? [user?.username, parkCode] : null, fetchUserSite)
Currently, this works. The object is added to the database with the above attributes. HOWEVER, when I click a button to update this newly created object, I get an error of path: null, locations: (1) […], message: "Variable 'input' has coerced Null value for NonNull type 'ID!'"
This is my call to update the object when I click a button with the onClick handler "handleDBQuery".
const handleDBQuery = async () => {
  await API.graphql({
    query: updateSite,
    variables: {
      input: {
        id: data?.id,
        bookmarked: true,
        owner: user?.username,
      },
    },
    authMode: 'AMAZON_COGNITO_USER_POOLS',
  })
  console.log(`${name} Bookmarked`)
}
My hunch is that the updateSite query does not know about the createSite query on page load.
In short, how can I update an item after I just created it?
I looked into the code on the master branch and followed along as you describe. I found that data?.id comes from a state variable that is set only before the call to createSite. I suggest calling setId again with the data returned from createSite.
Try this
const fetchUserSite = async (owner, code) => {
  try {
    // Create site object if no site exists
    if (userData == null) {
      const siteInfo = {
        id: uuidv4(),
        code: parkCode,
        owner: user?.username,
        bookmarked: false,
        visited: false,
      }
      const { data: newData } = await API.graphql({
        query: createSite,
        variables: {input: siteInfo},
        authMode: 'AMAZON_COGNITO_USER_POOLS',
      });
      setId(newData.createSite.id); // <====== here (or setId(siteInfo.id))
      console.log(`${code} added for the first time`)
      return newData.createSite; // <======= and this, maybe? (you may have to modify the graphql query so it returns the same shape as listSite)
    }
    return userData || null
  } catch (err) {
    console.log('Site not added by user', data, err)
  }
}
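As a follow-up sketch (mine, not from the answer): with the created item now returned from fetchUserSite, the SWR data can be guarded before the update, so the mutation never receives a null id:

// Sketch: bookmark using whatever SWR handed back (newly created or fetched item)
const handleDBQuery = async () => {
  if (!data?.id) return;              // nothing to update yet
  await API.graphql({
    query: updateSite,
    variables: {
      input: {
        id: data.id,                  // id of the row created by createSite
        bookmarked: true,
      },
    },
    authMode: 'AMAZON_COGNITO_USER_POOLS',
  })
}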

Is there a way to get a string (data) from a text file stored in S3 into the Alexa localisation.js file?

Problem:
I am trying to get the data from a text file stored in S3. I get it correctly in the intent handler using async/await, but I want the string in the localisation file, as I am trying to implement the solution in 2 languages.
I am getting an error saying the skill does not respond correctly.
This is file.js
const AWS = require('aws-sdk');
//========================
// This step is not required if you are running your code inside lambda or in
// the local environment that has AWS set up
//========================
const s3 = new AWS.S3();

async function getS3Object (bucket, objectKey) {
  try {
    const params = {
      Bucket: 'my-bucket',
      Key: 'file.txt',
    };
    const data = await s3.getObject(params).promise();
    let dat = data.Body.toString('utf-8');
    return dat;
  } catch (e) {
    throw new Error(`Could not retrieve file from S3: ${e.message}`);
  }
}
module.exports = getS3Object;
this is the localisation.js file code
const dataText = require('file.js');
async let textTitle = await dataText().then(); // this does not work

module.exports = {
  en: {
    translation: {
      WELCOME_BACK_MSG : textTitle,
    }
  },
  it: {
    translation: {
      WELCOME_MSG: textTitle,
    }
  }
}
The problem is that in your localisation.js file you are trying to export something that is obtained via an asynchronous function call, but you cannot do that directly: module.exports is assigned and returned synchronously. See, for instance, this SO question and answer for in-depth background.
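One common workaround (a sketch of mine, not part of this answer) is to export a promise that resolves to the translation object, and await it wherever the strings are needed:

// localisation.js -- sketch: export a promise instead of a plain object
const getS3Object = require('./file.js');   // the helper from the question

module.exports = (async () => {
  const textTitle = await getS3Object('my-bucket', 'file.txt');
  return {
    en: { translation: { WELCOME_BACK_MSG: textTitle } },
    it: { translation: { WELCOME_MSG: textTitle } }
  };
})();

// consumer side (e.g. in an interceptor): const languageStrings = await require('./localisation.js');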
As you mention an Alexa skill, and given the file name localisation.js, I assume you are trying something similar to the solution proposed in this GitHub repository.
Analyzing the content of the index.js file they provide, it seems the library is using i18next for localisation.
The library provides the concept of backend if you need to load your localisation information from an external resource.
You can implement a custom backend, although the library offers one that could fit your needs, i18next-http-backend.
As indicated in the documentation, you can configure the library to fetch your localization resources with this backend with something like the following:
import i18next from 'i18next';
import Backend from 'i18next-http-backend';

i18next
  .use(Backend)
  .init({
    backend: {
      // for all available options read the backend's repository readme file
      loadPath: '/locales/{{lng}}/{{ns}}.json'
    }
  });
Here on SO you can find a more complete example.
You need to provide a similar configuration to the localisation interceptor provided in the Alexa skill example project, perhaps something like:
import HttpApi from 'i18next-http-backend';

/**
 * This request interceptor will bind a translation function 't' to the handlerInput
 */
const LocalizationInterceptor = {
  process(handlerInput) {
    const localisationClient = i18n
      .use(HttpApi)
      .init({
        lng: Alexa.getLocale(handlerInput.requestEnvelope),
        // resources: languageStrings,
        backend: {
          loadPath: 'https://your-bucket.amazonaws.com/locales/{{lng}}/translations.json',
          crossDomain: true,
        },
        returnObjects: true
      });
    localisationClient.localise = function localise() {
      const args = arguments;
      const value = i18n.t(...args);
      if (Array.isArray(value)) {
        return value[Math.floor(Math.random() * value.length)];
      }
      return value;
    };
    handlerInput.t = function translate(...args) {
      return localisationClient.localise(...args);
    }
  }
};
Please be aware that instead of a text file you need to return a valid JSON file with the appropriate translations:
{
  "WELCOME_MSG": "Welcome!!",
  "WELCOME_BACK_MSG": "Welcome back!!"
}

Dropzone.js + AWS S3 stalling queue

I'm trying to implement a Dropzone.js uploader to Amazon S3 using aws-sdk.js for the browser. But when I exceed the 'parallelUploads' maximum in the settings, the queue never completes. I'm using the approach in the following link:
amazon upload
relevant parts of my code:
var dz = new Dropzone("#DZContainer", {
  acceptedFiles: "image/*,.jpg,.jpeg,.png,.gif",
  autoQueue: true,
  autoProcessQueue: true,
  parallelUploads: 10,
  clickable: [".uploadButton"],
  accept: function(file, done){
    let params = {
      "Bucket": "upload-bucket",
      "Key": getFullKey(file.name),
      Body: file,
      Region: "us-east-1",
      ContentType: file.type
    }
    file.s3upload = new AWS.S3.ManagedUpload({params: params});
    if (typeof(done) === 'function') done();
  },
  canceled: function(file) {
    if (file.s3upload) file.s3upload.abort();
  },
  init: function () {
    this.on('removedfile', function (file) {
      if (file.s3upload) file.s3upload.abort();
    });
  }
});
dz.uploadFiles = function (files) {
  for (var j = 0; j < files.length; j++) {
    var file = files[j];
    dz.SendFile(file);
  }
};
dz.SendFile = function(file) {
  file.s3upload.send(function (err, data) {
    if (err) {
      console.error(err);
      dz.emit("error", file, err.message);
    } else {
      dz.emit("complete", file);
    }
  });
};
If I drag in (or use the clickable to select) more than 10 files, the first 10 complete but the rest of the queue is never processed. What am I missing? All help is appreciated.
EDIT: With a little more digging into Dropzone, it looks as though the file status never gets set to complete. I see a function called _finished() in the Dropzone code, but I'm having a hard time figuring out what specifically is supposed to trigger that function. I have tried dz.emit("complete", file) as listed below, as well as adding dz.emit("success", file), but my breakpoint at the first line of _finished() never triggers, so file.status never gets set to completed.
Does anyone know when/what/how _finished() is supposed to be run?
As mentioned in the edit, I was able to track down where .status was not being set properly. This happens in a private Dropzone function called _finished().
On further examination, I noticed that _finished() also calls emit("complete", file) after setting file.status to Dropzone.SUCCESS and emitting "success". It then checks whether autoProcessQueue is set and, if so, returns the result of a processQueue() call.
I had a hard time figuring out what triggered this function: it is attached to an onload event that, I eventually realized, is tied to the XHTTPRequest object used by the internal uploader (which is being overridden by the S3 uploader).
So I modified my function to emulate what Dropzone._finished() does, and it behaves as expected:
dz.SendFile = function(file) {
  file.s3upload.send(function (err, data) {
    if (err) {
      console.error(err);
      dz.emit("error", file, err.message);
    } else {
      file.status = Dropzone.SUCCESS;
      dz.emit("success", file, data, err);
      dz.emit("complete", file);
      if (dz.options.autoProcessQueue)
        dz.processQueue();
    }
  });
};

Connect AWS mobile backend to DynamoDB

I am trying to use the AWS mobile backend (using a Lambda function) to insert into DynamoDB (also configured in the mobile backend), but with no success so far.
The relevant code:
'use strict';
console.log("Loading function");
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: process.env.MOBILE_HUB_PROJECT_REGION});

exports.handler = function(event, context, callback) {
  var responseCode = 200;
  var requestBody, pathParams, queryStringParams, headerParams, stage,
      stageVariables, cognitoIdentityId, httpMethod, sourceIp, userAgent,
      requestId, resourcePath;
  console.log("request: " + JSON.stringify(event));

  // Request Body
  requestBody = event.body;
  if (requestBody !== undefined && requestBody !== null) {
    // Set 'test-status' field in the request to test sending a specific response status code (e.g., 503)
    responseCode = JSON.parse(requestBody)['test-status'];
  }
  // Path Parameters
  pathParams = event.path;
  // Query String Parameters
  queryStringParams = event.queryStringParameters;
  // Header Parameters
  headerParams = event.headers;

  if (event.requestContext !== null && event.requestContext !== undefined) {
    var requestContext = event.requestContext;
    // API Gateway Stage
    stage = requestContext.stage;
    // Unique Request ID
    requestId = requestContext.requestId;
    // Resource Path
    resourcePath = requestContext.resourcePath;
    var identity = requestContext.identity;
    // Amazon Cognito User Identity
    cognitoIdentityId = identity.cognitoIdentityId;
    // Source IP
    sourceIp = identity.sourceIp;
    // User-Agent
    userAgent = identity.userAgent;
  }

  // API Gateway Stage Variables
  stageVariables = event.stageVariables;
  // HTTP Method (e.g., POST, GET, HEAD)
  httpMethod = event.httpMethod;

  // TODO: Put your application logic here...
  let params = {
    Item: {
      "prop1": 0,
      "prop2": "text"
    },
    TableName: "testTable"
  };
  docClient.put(params, function(data, err){
    if (err)
      responseCode = 500;
    else {
      responseCode = 200;
      context.succeed(data);
    }
  });

  // For demonstration purposes, we'll just echo these values back to the client
  var responseBody = {
    requestBody: requestBody,
    pathParams: pathParams,
    queryStringParams: queryStringParams,
    headerParams: headerParams,
    stage: stage,
    stageVariables: stageVariables,
    cognitoIdentityId: cognitoIdentityId,
    httpMethod: httpMethod,
    sourceIp: sourceIp,
    userAgent: userAgent,
    requestId: requestId,
    resourcePath: resourcePath
  };
  var response = {
    statusCode: responseCode,
    headers: {
      "x-custom-header": "custom header value"
    },
    body: JSON.stringify(responseBody)
  };
  console.log("response: " + JSON.stringify(response));
  context.succeed(response);
};
This doesn't put the item into the table for some reason.
I gave the necessary permissions using the roles section; is there anything I am missing?
responseCode is only for testing purposes.
Edit:
Tried the approach from "AWS node.js lambda request dynamodb but no response (no err, no return data)", but that doesn't work either.
Edit 2:
Added the full handler code (it is the default code generated when creating a first AWS Lambda).
I have refactored some bits of your code to look much simpler and use async/await (make sure to select Node 8.10 as the runtime for your function) instead of callbacks. I also got rid of the context and callback parameters, as they were needed for older versions of NodeJS; once you're on Node 8+, async/await should be the default option.
Also, it is possible to chain .promise() onto docClient.put, so you can easily await it, making your code much simpler. I have left only the DynamoDB part (which is what is relevant to your question).
'use strict';
console.log("Loading function");
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient({region: process.env.MOBILE_HUB_PROJECT_REGION});

exports.handler = async (event) => {
  let params = {
    Item: {
      "prop1": 0,
      "prop2": "text"
    },
    TableName: "testTable"
  };
  try {
    await docClient.put(params).promise();
  } catch (e) {
    console.log(e);
    return {
      message: e.message
    };
  }
  return { message: 'Data inserted successfully' };
};
Things to keep in mind if it still does not work:
Make sure your Lambda function has the right permissions to insert items into DynamoDB (AmazonDynamoDBFullAccess will do it).
You ALWAYS have to provide the partition key when inserting items into DynamoDB. In your example, the JSON only has two properties, prop1 and prop2. If neither of them is the partition key, your code will certainly fail (see the sketch below).
Make sure your table also exists.
If your code fails, just check the CloudWatch logs, as any exception is now captured and printed to the console.
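For illustration only (the real attribute name depends on how testTable was defined), a put that includes a plausible partition key might look like this:

// Sketch: assumes testTable was created with a partition key named "id"
let params = {
  TableName: "testTable",
  Item: {
    "id": "item-001",   // partition key value -- required on every put
    "prop1": 0,
    "prop2": "text"
  }
};
await docClient.put(params).promise();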
The reason why no data is written to the table is that the call to DynamoDB put is asynchronous and returns by calling your callback. But during that time the rest of the code continues to execute, and your function eventually finishes before the call to DynamoDB has had a chance to complete.
You can use the async/await keywords to make your code synchronous:
async function writeToDynamoDB(params) {
  return new Promise((resolve, reject) => {
    docClient.put(params, function(err, data) {
      if (err)
        reject(err);
      else
        resolve(data);
    });
  });
}

let params = ...
var data = await writeToDynamoDB(params)
You can find sample code I wrote (in Typescript) at https://github.com/sebsto/maxi80-alexa/blob/master/lambda/src/DDBController.ts

Download file from Amazon S3 to lambda and extract it [duplicate]

Good day guys.
I have a simple question: how do I download an image from an S3 bucket to the Lambda function's temp folder for processing? Basically, I need to attach it to an email (this I can do when testing locally).
I have tried:
s3.download_file(bucket, key, '/tmp/image.png')
as well as (not sure which parameters will help me get the job done):
s3.getObject(params, (err, data) => {
  if (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}.`;
    console.log(message);
    callback(message);
  } else {
    console.log('CONTENT TYPE:', data.ContentType);
    callback(null, data.ContentType);
  }
});
Like I said, simple question, which for some reason I can't find a solution for.
Thanks!
You can get the image using the AWS S3 API, then write it to the /tmp folder using fs.
var AWS = require('aws-sdk');
var fs = require('fs');
var s3 = new AWS.S3();

var params = { Bucket: "BUCKET_NAME", Key: "OBJECT_KEY" };
s3.getObject(params, function(err, data) {
  if (err) {
    console.error(err.code, "-", err.message);
    return callback(err);
  }
  fs.writeFile('/tmp/filename', data.Body, function(err) {
    if (err) {
      console.log(err.code, "-", err.message);
    }
    return callback(err);
  });
});
Out of curiosity, why do you need to write the file in order to attach it? It seems kind of redundant to write the file to disk just so that you can then read it from disk.
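Following up on that comment (a sketch of mine, not from this thread): if the only goal is to attach the image to an email, the S3 object body can usually be passed along as a Buffer without touching /tmp. For example, with nodemailer (the mail library is an assumption; the thread does not name one):

// Sketch: attach the S3 object directly from memory.
// "transporter" is assumed to be an already-configured nodemailer transport.
const data = await s3.getObject({ Bucket: bucket, Key: key }).promise();
await transporter.sendMail({
  from: "sender@example.com",       // placeholder addresses
  to: "recipient@example.com",
  subject: "Image attached",
  attachments: [
    { filename: "image.png", content: data.Body }  // data.Body is a Buffer
  ]
});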
If you're writing it straight to the filesystem, you can also do it with streams. It may be a little faster / more memory-friendly, especially in a memory-constrained environment like Lambda.
var fs = require('fs');
var path = require('path');

var params = {
  Bucket: "mybucket",
  Key: "image.png"
};

var tempFileName = path.join('/tmp', 'downloadedimage.png');
var tempFile = fs.createWriteStream(tempFileName);

s3.getObject(params).createReadStream().pipe(tempFile);
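One caveat worth adding (my note, not from the answer): pipe() returns before the file is fully written, so in a Lambda handler you generally need to wait for the write stream to finish before using the file; a sketch:

// Sketch: resolve only once the temp file has been fully written
var downloadDone = new Promise(function (resolve, reject) {
  s3.getObject(params).createReadStream()
    .on('error', reject)            // S3 read error
    .pipe(tempFile)
    .on('error', reject)            // filesystem write error
    .on('finish', resolve);         // file is flushed to /tmp
});
// e.g. inside an async handler: await downloadDone;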
// Using NodeJS version 10.0 or later and promises
const fsPromise = require('fs').promises;

try {
  const params = {
    Bucket: 's3Bucket',
    Key: 'file.txt',
  };
  const data = await s3.getObject(params).promise();
  await fsPromise.writeFile('/tmp/file.txt', data.Body);
} catch (err) {
  console.log(err);
}
I was having the same problem, and the issue was that I was using Runtime.NODEJS_12_X in my AWS lambda.
When I switched over to NODEJS_14_X it started working for me :').
Also, writing under /tmp is required (it is the only writable path in a Lambda environment); the code above will write directly to /tmp/file.ext.