I am trying to upload an image from the device directly to S3. I can read the image metadata and send it to the server, which generates a pre-signed URL for AWS S3. I now have the pre-signed URL with which I want to upload the file/image using axios, but somehow the image/file is not getting uploaded. Here is my code.
Image data (read by the ImagePicker):
data: "" // image raw data
fileName: "acx.jpg"
fileSize: ""
uri: ""
path: ""
Code for sending the selected image to AWS S3:
// `fileType` comes from the picked image's metadata;
// `res.data.signedRequest` is the pre-signed URL returned by the server
const options = { headers: { 'Content-Type': fileType } };
axios.put(res.data.signedRequest, data, options);
I'm getting the following response:
res = {
  config:
  data: ""
  status: 200
  statusText: undefined
  ...
}
So what should I pass as data in the axios request?
Have you explored the react-native-aws3 plugin? It would make the process a lot easier. You could then try:
import { RNS3 } from "react-native-aws3";

upload = () => {
  const file = {
    uri: this.state.imageuri,
    name: "acx.jpg",
    type: "image/jpeg"
  };
  const options = {
    keyPrefix: "ts/",
    bucket: "celeb-c4u",
    region: "eu-west-1",
    // never ship real credentials in the app bundle; placeholders here
    accessKey: "YOUR_ACCESS_KEY",
    secretKey: "YOUR_SECRET_KEY",
    successActionStatus: 201
  };
  return RNS3.put(file, options)
    .then(response => {
      if (response.status !== 201)
        throw new Error("Failed to upload image to S3");
      else {
        console.log(
          "Successfully uploaded image to s3. s3 bucket url: ",
          response.body.postResponse.location
        );
        this.setState({
          url: response.body.postResponse.location
        });
      }
    })
    .catch(error => {
      console.log(error);
    });
};
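Alternatively, to keep the pre-signed-URL + axios approach from the question: the PUT body has to be the file's raw bytes, not the metadata object the picker returns. A minimal sketch of one common approach, assuming the picker's uri points at a readable local file (signedRequest and fileType as in the question, the helper name is mine):

// Read the picked file into a Blob, then PUT the blob itself
const uploadToS3 = async (signedRequest, uri, fileType) => {
  const fileResponse = await fetch(uri); // React Native's fetch can usually read local picker uris
  const blob = await fileResponse.blob(); // the image's raw bytes
  return axios.put(signedRequest, blob, {
    headers: { 'Content-Type': fileType },
  });
};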
When a user sends a file or any other data through a pre-signed URL to an S3 bucket, there is no restriction in between, so the user can send anything through the pre-signed URL to the bucket.
But I want to check the data the user sends between the pre-signed URL and the S3 bucket.
I am using the Serverless Framework.
Please help me; thanks in advance.
My Lambda function code is here:
const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const BucketName = process.env.BUCKET_NAME; // assumption: bucket name comes from the environment

module.exports.uploadLarge = async (event) => {
  console.log({ event });
  try {
    const body = JSON.parse(event.body);
    console.log({ body });
    const action = body.action; // e.g. "putObject" or "getObject"
    const type = body.type;     // the file's content type
    const key = body.key;       // the object key to sign for
    const params = {
      Bucket: BucketName,
      Key: key,
      Expires: 10000, // URL lifetime in seconds
    };
    if (action === "putObject") {
      params.ContentType = type;
    }
    console.log({ params });
    const u = S3.getSignedUrl(action, params);
    console.log({ u });
    return {
      statusCode: 200,
      body: JSON.stringify({ u }),
      headers: {
        'Access-Control-Allow-Origin': '*',
      },
    };
  } catch (err) {
    return {
      statusCode: 500,
      headers: {
        'Content-Type': 'application/json',
        'Access-Control-Allow-Origin': '*',
      },
      body: JSON.stringify(err),
    };
  }
};
But I want to check the data the user sends between the pre-signed URL and the S3 bucket.
It's not possible with your current design. You can only perform a check after the user has uploaded the file. For example, set up an S3 trigger for the PutObject event, which will invoke a Lambda function to verify the file; a sketch of that approach is below. Otherwise, you have to change your architecture and put a proxy between the users and S3, for example API Gateway, CloudFront, or a custom application.
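For illustration, a minimal sketch of the trigger approach (the handler name, MAX_BYTES limit, and validation rules are my own assumptions, not from the question): a Lambda subscribed to the bucket's ObjectCreated:Put event inspects each new object and deletes anything that fails validation.

const AWS = require('aws-sdk');
const S3 = new AWS.S3();
const MAX_BYTES = 5 * 1024 * 1024; // hypothetical size limit

// Invoked by the bucket's s3:ObjectCreated:Put event
module.exports.verifyUpload = async (event) => {
  for (const record of event.Records) {
    const Bucket = record.s3.bucket.name;
    // S3 event keys are URL-encoded, with '+' standing in for spaces
    const Key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));
    const head = await S3.headObject({ Bucket, Key }).promise();
    const isImage = /^image\//.test(head.ContentType || '');
    // Remove anything that is too large or not an image
    if (head.ContentLength > MAX_BYTES || !isImage) {
      await S3.deleteObject({ Bucket, Key }).promise();
    }
  }
};

With the Serverless Framework this just means declaring an s3 event on the function; the upload itself still goes straight to S3, and the check runs immediately afterwards.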
I'm using https://www.npmjs.com/package/aws-s3 and https://www.npmjs.com/package/filepond to upload images to my AWS S3 bucket. I've got it running, but I'm wondering if there's an easy way to show all the images in the AWS S3 bucket. I don't want to save each link to an image in a database and then run through that. Any suggestions?
Switched to package https://www.npmjs.com/package/aws-sdk
Added some scripting:
<script>
import AWS from 'aws-sdk'

AWS.config.update({
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: '/******'
  }),
  region: '/******'
});

// Bind the bucket once so later calls don't need to repeat it
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  params: { Bucket: 'gamesnap' }
})

export default {
  data() {
    return {
      baseUrl: 'https://******.s3.eu-central-1.amazonaws.com/',
      images: []
    }
  },
  mounted() {
    // List the bucket's objects and keep their metadata (Key, Size, ...)
    s3.listObjectsV2((err, data) => {
      if (err) {
        console.log("Error: ", err);
      } else {
        this.images = data.Contents
      }
    });
  }
}
</script>
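One caveat worth adding: listObjectsV2 returns at most 1,000 keys per call, so if the bucket can grow past that you have to follow NextContinuationToken. A small sketch reusing the s3 client above (the helper name is mine):

// Drain every page of results; the Bucket is already bound on the client
async function listAllObjects() {
  const all = [];
  let token;
  do {
    const params = token ? { ContinuationToken: token } : {};
    const data = await s3.listObjectsV2(params).promise();
    all.push(...data.Contents);
    token = data.NextContinuationToken;
  } while (token);
  return all;
}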
I am working on an app that is deployed to AWS EC2 (client and server as separate instances). My app uploads users' images to an S3 bucket.
I just added domains to both instances for HTTPS certificates (client and REST API),
and since then I am getting this error while trying to save files to my S3 bucket:
code: "AccessDenied"
extendedRequestId: "*****"
message: "Access Denied"
region: null
requestId: "****"
retryDelay: 67.72439862213535
retryable: false
statusCode: 403
time: "2020-09-12T13:42:29.739Z"
message: "Access Denied"
I have made this bucket public, but even then it's not working.
Here is my code:
require('dotenv').config();
let multer = require('multer');
let AWS = require('aws-sdk');
let { uuid } = require('uuidv4');

let s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ID,
  secretAccessKey: process.env.AWS_SECRET,
});

// memoryStorage keeps uploads in RAM as Buffers; it takes no destination
let storage = multer.memoryStorage();

let multiUpload = multer({ storage }).fields([
  { name: 'profile', maxCount: 1 },
  { name: 'gallery' },
]);

// `router`, `auth`, and `Profile` come from the rest of the app
router.post('/', auth.required, multiUpload, async function (req, res, next) {
  var profile = new Profile();
  profile.userId = req.payload.id;
  if (typeof req.files.profile !== 'undefined') {
    let myImage = req.files.profile[0].originalname.split('.');
    let fileType = myImage[myImage.length - 1];
    let params = {
      Bucket: process.env.AWS_BUCKET_NAME,
      Key: `${uuid()}.${fileType}`,
      Body: req.files.profile[0].buffer,
      ContentType: 'image/jpeg',
      ACL: 'public-read',
    };
    let data = await s3.upload(params).promise();
    if (!data.Location) return res.sendStatus(500);
    profile.profileImage.url = data.Location;
    profile.profileImage.imageId = data.key;
  }
  if (typeof req.files.gallery !== 'undefined') {
    let galleryImageList = [];
    for (let i = 0; i < req.files.gallery.length; i++) {
      let myImage = req.files.gallery[i].originalname.split('.');
      let fileType = myImage[myImage.length - 1];
      let params = {
        Bucket: process.env.AWS_BUCKET_NAME,
        Key: `${uuid()}.${fileType}`,
        Body: req.files.gallery[i].buffer,
        ContentType: 'image/jpeg',
        ACL: 'public-read',
      };
      let data = await s3.upload(params).promise();
      if (!data.Location) return res.sendStatus(500);
      let galleryItem = {
        url: data.Location,
        imageId: data.key,
      };
      galleryImageList.push(galleryItem);
    }
    profile.gallery = galleryImageList;
  }
  profile
    .save()
    .then(function () {
      return res.json({ profile: profile.toProfileJSONFor() });
    })
    .catch(next);
});
I apparently don't have enough reputation to comment, but you should say whether you're getting Access Denied when reading from the bucket, writing to it, or both, and include the code snippets that read/write from the bucket. You should also explain what you mean by "added domains to both instances for HTTPS certification", because that shouldn't be necessary.
Editing my response since you included your code:
It looks like you are using access keys to upload your files:
let s3 = new AWS.S3({
accessKeyId: process.env.AWS_ID,
secretAccessKey: process.env.AWS_SECRET,
});
let data = await s3.upload(params).promise();
So, if you are getting Access Denied on your writes, you should look at the permissions attached to those keys (the IAM user's policy).
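For reference, a minimal sketch of the kind of IAM policy those keys need for this upload code; the bucket name is a placeholder, and s3:PutObjectAcl is included because the code sets ACL: 'public-read':

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:PutObjectAcl"],
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}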
I receive <Code>InternalError</Code><Message>We encountered an internal error. Please try again.</Message> when doing a POST form-data upload to an S3 bucket.
S3 configuration:
const attachmentBucket = new Bucket(this, 'caS3Bucket', {
bucketName: environmentName + '.caattachments',
cors: [{
allowedMethods: [HttpMethods.GET, HttpMethods.POST],
allowedOrigins: ['*'],
allowedHeaders: ['*'],
maxAge: 3000
} as CorsRule]
} as BucketProps);
Pre-signing the upload URL through a Lambda:
const params = {
  Bucket: process.env.S3_BUCKET!.split(':')[5], // bucket name is the last segment of the ARN
  Fields: {
    key: payload.path,
    acl: 'public-read'
  },
  Expires: 3600
};
const postData = await new Promise((resolve, reject) => {
  s3.createPresignedPost(params, (err, data) => {
    // surface signing errors instead of silently resolving undefined
    if (err) reject(err);
    else resolve(data);
  });
}) as AWS.S3.PresignedPost;
I append all the parameters in postData.fields to the form along with the file. Is there any way to debug this?
The issue was that the form was missing the 'Policy' field. I wish AWS errors were more descriptive. The final working form fields look like this:
const formData: FormData = new FormData();
// note: the field names here suggest the backend remapped
// createPresignedPost's keys (e.g. 'X-Amz-Signature' -> fields.signature)
formData.append('key', uploadData.fields.key);
formData.append('acl', uploadData.fields.acl);
formData.append('bucket', uploadData.fields.bucket);
formData.append('X-Amz-Algorithm', uploadData.fields.algorithm);
formData.append('X-Amz-Credential', uploadData.fields.credential);
formData.append('X-Amz-Date', uploadData.fields.date);
formData.append('X-Amz-Security-Token', uploadData.fields.token);
formData.append('Policy', uploadData.fields.policy);
formData.append('X-Amz-Signature', uploadData.fields.signature);
formData.append('file', file, file.name); // S3 requires the file part last
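A more defensive variant, assuming your backend returns postData.fields unmodified, is to append whatever fields createPresignedPost produced, so a field like Policy can never be missed:

const formData: FormData = new FormData();
// createPresignedPost already uses the exact field names S3 expects
Object.entries(uploadData.fields).forEach(([name, value]) => {
  formData.append(name, value);
});
formData.append('file', file, file.name); // the file part must come last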
I just used the following code to get an S3 pre-signed URL:
import AWS from 'aws-sdk';
AWS.config.update({
accessKeyId: process.env.AWS_S3_KEY,
secretAccessKey: process.env.AWS_S3_SECRET,
region: process.env.AWS_S3_REGION
});
const s3 = new AWS.S3({
region: process.env.AWS_S3_REGION,
signatureVersion: 'v4'
});
export const s3Auth = (req, res) => {
s3.getSignedUrl(
'putObject',
{
Bucket: 'bucket',
Key: 'mykey',
ContentType: 'multipart/form-data',
Expires: 60
},
(error, url) => {
if (!error && url) {
res.send({
url
});
} else {
res.status(500);
res.send({ error: 'AWS error!' });
throw error;
}
}
);
};
As you can see, I set my region in both the AWS config and the AWS.S3 object. But the URL returned by this function still does not include the correct region: it contains us-east-1 instead of the ap-southeast-1 I set. The environment variables are coming through correctly; I tested them. Any idea what's happening?
Sample URL:
https://{BUCKET_NAME}.s3.amazonaws.com/{FOLDER}?Content-Type=multipart%2Fform-data&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential={KEY}%2F20171023%2Fus-east-1%2Fs3%2Faws4_request...