How can I upload images to S3 in React Native?

I am trying to upload local images from my React Native app (I'm using Expo) to an S3 bucket, but nothing seems to work.
I'm using the react-native-aws3 library, but the Promise's .then callback never gets called. It doesn't throw any error either.
This is the code:
const options = {
  bucket: BUCKET_NAME,
  region: REGION,
  accessKey: AWS_USER_KEY,
  secretKey: AWS_PRIVATE_KEY,
  successActionStatus: 201
}

export async function uploadImage(imageUri, imageName, imageType) {
  const file = {
    uri: imageUri,
    name: imageName,
    type: "image/" + imageType
  }

  RNS3.put(file, options).then((response) => {
    console.log("done")
    console.log(response)
    console.log(response.status)
  })
}
This is the Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1648118554991",
  "Statement": [
    {
      "Sid": "Stmt1648118551643",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}
This is the IAM user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
I tried following every tutorial I could find, but nothing seems to work. Do I have to do something else, such as handle the upload server-side? Or is there something wrong with the policies?

To upload content to an Amazon S3 bucket, use the official AWS SDK for JavaScript. To use this SDK with React Native, see this doc topic:
Getting started in React Native
To work with Amazon S3, see:
Amazon S3 examples
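For example, a minimal sketch of such an upload with the v3 SDK might look like the following. The constants (BUCKET_NAME, REGION, AWS_USER_KEY, AWS_PRIVATE_KEY) mirror the question; bundling static keys into an app is for illustration only, and credentials should normally come from Cognito:
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Static keys are for illustration only; prefer Cognito-issued credentials.
const client = new S3Client({
  region: REGION,
  credentials: {
    accessKeyId: AWS_USER_KEY,
    secretAccessKey: AWS_PRIVATE_KEY,
  },
});

export async function uploadImage(imageUri, imageName, imageType) {
  // React Native's fetch can read a local file:// URI and expose it as a blob
  const response = await fetch(imageUri);
  const blob = await response.blob();

  // Awaiting (or returning) the promise also surfaces errors that a
  // fire-and-forget .then() without a .catch() would silently swallow
  return client.send(new PutObjectCommand({
    Bucket: BUCKET_NAME,
    Key: imageName,
    Body: blob,
    ContentType: `image/${imageType}`,
  }));
}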

The code below works with "react-native": "0.68.2".
Your React Native camera returns an image URI that looks like this: file:///storage/emulated/0/Android/data/com.lobb.agent/files/Pictures/image-e4047ca8-5df9-489d-ac86-2bcdf24cd3436728860500254451478.jpg
The following code does these steps:
accepts your image URI,
converts the image URI into a blob,
makes a PUT request to the S3 bucket.
const handleImageUpload = async () => {
  try {
    const resp = await fetch(yourImageURI);      // fetch the local file:// URI
    const imageBody = await resp.blob();         // convert the URI into a blob
    const result = await fetch(yourSignedURL, {  // PUT the blob to the S3 bucket
      method: 'PUT',
      body: imageBody,
    });
    console.log('result:', result);
  } catch (error) {
    console.log('error upload :', error);
  }
};
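This answer assumes yourSignedURL already exists. As a sketch (not part of the original answer), a presigned PUT URL could be generated server-side with the AWS SDK v3; the region, bucket, and function names here are placeholders:
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const client = new S3Client({ region: 'us-east-1' }); // placeholder region

export async function createUploadUrl(key) {
  const command = new PutObjectCommand({ Bucket: 'your-bucket', Key: key });
  // URL stays valid for 5 minutes; the app PUTs the blob to it directly
  return getSignedUrl(client, command, { expiresIn: 300 });
}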

Related

Identity Pool Role Can't Access S3 Bucket Access Point

Summary: I am using AWS Amplify Auth class with a pre-configured Cognito User Pool for authentication. After authentication, I am using the Cognito ID token to fetch identity pool credentials (using AWS CredentialProviders SDK) whose assumed role is given access to an S3 access point. I then attempt to fetch a known object from the bucket's access point using the AWS S3 SDK. The problem is that the request returns a response of 403 Forbidden instead of successfully getting the object, despite my role policy and bucket (access point) policy allowing the s3:GetObject action on the resource.
I am assuming something is wrong with the way my policies are set up. Code snippets below.
I am also concerned that I'm not getting the right role back from the credentials provider. I don't allow unauthenticated roles on the identity pool, so I'm not sure, and I don't know how to verify the role returned in the credentials' session token.
I also may not be configuring the SDK client objects properly, though I followed the documentation to the best of my understanding. (I am using AWS SDK v3, not v2, so the syntax is slightly different and uses modular imports.)
Backend Configurations - IAM
Identity Pool: Authenticated Role Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRoleWithWebIdentity",
        "sts:TagSession"
      ],
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "<MY_IDENTITY_POOL_ID>"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
Identity Pool: Authenticated Role S3 Access Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::<MY_ACCESS_POINT_NAME>/object/*"
    }
  ]
}
Backend Configurations - S3
S3 Bucket and Access Points: Block All Public Access
S3 Bucket CORS Policy:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 300
  }
]
S3 Bucket Policy (Delegates Access Control to Access Points):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateAccessControlToAccessPoints",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::<MY_BUCKET_NAME>",
        "arn:aws:s3:::<MY_BUCKET_NAME>/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "<MY_ACCT_ID>"
        }
      }
    }
  ]
}
Access Point Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessPointToGetObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCT_ID>:role/<MY_IDENTITY_POOL_AUTH_ROLE_NAME>"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:<REGION>:<ACCT_ID>:accesspoint/<MY_ACCESS_POINT_NAME>/object/*"
    }
  ]
}
Front End AuthN & AuthZ
Amplify Configuration of User Pool Auth
Amplify.configure({
  Auth: {
    region: '<REGION>',
    userPoolId: '<MY_USER_POOL_ID>',
    userPoolWebClientId: '<MY_USER_POOL_APP_CLIENT_ID>'
  }
})
User AuthZ process:
On a user login event, call Amplify's Auth.signIn(), which returns a CognitoUser:
// Log in user (error checking omitted here for post)
const CognitoUser = await Auth.signIn(email, secret);

// Get ID Token JWT
const CognitoIdToken = CognitoUser.signInUserSession.getIdToken().getJwtToken();

// Use @aws-sdk/credential-providers to get Identity Pool Credentials
const credentials = fromCognitoIdentityPool({
  clientConfig: { region: '<REGION>' },
  identityPoolId: '<MY_IDENTITY_POOL_ID>',
  logins: {
    'cognito-idp.<REGION>.amazonaws.com/<MY_USER_POOL_ID>': CognitoIdToken
  }
})

// Create S3 SDK Client
const client = new S3Client({
  region: '<REGION>',
  credentials
})

// Format S3 GetObjectCommand parameters for object to get from access point
const s3params = {
  Bucket: '<MY_ACCESS_POINT_ARN>',
  Key: '<MY_OBJECT_KEY>'
}

// Create S3 client command object
const getObjectCommand = new GetObjectCommand(s3params);

// Get object from access point (execute command)
const response = await client.send(getObjectCommand); // -> 403 FORBIDDEN
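(Not part of the original post: one way to verify which role the identity pool actually vends, as the question asks, is to call STS GetCallerIdentity with the same credentials provider and inspect the returned ARN.)
import { STSClient, GetCallerIdentityCommand } from '@aws-sdk/client-sts';

// Sketch only: reuses the `credentials` provider created above
const sts = new STSClient({ region: '<REGION>', credentials });
const identity = await sts.send(new GetCallerIdentityCommand({}));
// Expect something like arn:aws:sts::<ACCT_ID>:assumed-role/<MY_IDENTITY_POOL_AUTH_ROLE_NAME>/...
console.log(identity.Arn);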

Unable to upload files to S3, Access Denied

I am running an EC2 instance with an IAM role that has AmazonS3FullAccess. On it I'm running a Node.js server and trying to upload a file to an S3 bucket (public access), but I'm getting a 403 Access Denied error.
Since the EC2 instance has S3 access through the role, I didn't provide an accessKey/secret in Node:
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-iam.html
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const params = {
  Bucket: 'sample_name', // pass your bucket name
  Key: 'filename',
  Body: "<p>Hey</p>",
  ContentDisposition: 'inline',
  ContentType: 'text/html',
};

s3.upload(params, function (s3Err, data) {
  if (s3Err) throw s3Err;
  console.log(data)
})
Could someone please help me with this? Thanks in advance.
Go to your bucket permissions and check whether a bucket policy is attached. If not, add one like the following; it might work!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket_name",
        "arn:aws:s3:::bucket_name/*"
      ]
    }
  ]
}
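Note that "Principal": "*" makes the bucket writable by anyone. If uploads only ever come from the EC2 instance, a tighter variant (the account ID and role name below are placeholders) scopes the statement to the instance role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCT_ID>:role/<EC2_INSTANCE_ROLE>"
      },
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}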

putObject to Public S3 Bucket

I have an API-invoked Lambda that generates signed getObject and putObject URLs. GET and PUT on my restricted bucket (contains zip files) work fine, but PUT on my public bucket (contains images) returns a "SignatureDoesNotMatch" error. There is no GET on the public bucket; images from that bucket are referenced directly. Do I need extra configuration to PUT on a public bucket? I've tried giving the most generous permissions I can think of, without luck.
EDIT: I did end up having to send in the specific image MIME type to the endpoint that generates the signed URL. image/* didn't work, unfortunately.
Signed URL Generation (called by getSignedImgUploadUrl)
let params = {
  Bucket: "public-bucket",
  Key: `${folder}/${key}.jpg`, // Ideally without extension
  Expires: 30,
  ContentType: "image/jpeg", // Ideally image/*
  ACL: "public-read" // Tried with and without this
};

let url = s3.getSignedUrl("putObject", params);

let result = {
  signedUrl: url,
  key: key
};

return result;
Use Signed URL
public uploadImg(folder: string, file: any, key: string): Observable<any> {
  return this._spinnerService.spinObservable(
    new Observable(subscriber => {
      this.getSignedImgUploadUrl(folder, key)
        .subscribe(result => {
          // put to signedUrl fails with 403 SignatureDoesNotMatch
          this._httpClient.put(result["signedUrl"], file, { headers: { "x-amz-acl": "public-read" } })
            .subscribe(() => {
              subscriber.next(result["key"]);
              subscriber.complete();
            }, err => {
              console.log(err);
              subscriber.error(err);
            });
        }, err => {
          console.log(err);
          subscriber.error(err);
        });
    }));
}
Lambda Role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::restricted-bucket/*"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "s3:*", // Ideally just GetObject/PutObject
      "Resource": "arn:aws:s3:::public-bucket/*"
    }
  ]
}
Public Bucket Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::public-bucket/*"
    },
    // Also tried adding s3:* for the lambda role without luck
    {
      "Sid": "Stmt1624999949645",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::account:role/service-role/lambda-role"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::public-bucket/*"
    }
  ]
}
According to the OP's comment, explicitly sending the Content-Type HTTP header worked. The reason this header is sometimes required is that the HTTP client cannot correctly infer the MIME type from the PUT payload.
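A presigned URL is signed over the exact ContentType it was generated with, so the PUT must send a matching header. As a sketch of the fix in the question's uploadImg (the image/jpeg value is assumed to match the signing params above):
// Hypothetical adjustment to the PUT inside uploadImg above
this._httpClient.put(result["signedUrl"], file, {
  headers: {
    "Content-Type": "image/jpeg", // must equal the ContentType used to sign
    "x-amz-acl": "public-read"
  }
}) // .subscribe(...) as in uploadImg above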

AWS S3: stop image from downloading automatically when browsed via URL

I am uploading images to my S3 bucket via my Node.js Application. I have the following bucket policy.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket_name/*"
    }
  ]
}
When I go to the link of the file (let's say it's an image), it automatically downloads as a file and does not render in the browser. Any idea how to make it display in the browser without force-downloading it?
Explicitly set the Content-Type of the object to image/<appropriate-subtype>:
var params = { Bucket: '', Key: '', ContentType: 'image/png', ... }
The default Content-Type would be application/octet-stream, so instead of rendering in the browser the object is downloaded.
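For instance, a minimal sketch with the v2 SDK (as used when uploading from Node.js); the bucket, key, and imageBuffer are placeholders:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.upload({
  Bucket: 'bucket_name',
  Key: 'photo.png',
  Body: imageBuffer, // hypothetical Buffer holding the image bytes
  ContentType: 'image/png', // without this, S3 stores application/octet-stream
}, (err, data) => {
  if (err) throw err;
  console.log('Uploaded:', data.Location); // the object now renders inline
});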

Unable to connect Kinesis trigger to Lambda. "Cannot access stream" - IAM role is configured properly. How do I debug this?

I am trying to connect a Kinesis trigger to a Lambda function, but I keep getting the same error no matter how I configure the IAM role. I've also tried setting "*" in most fields of the policy. How can I determine what's preventing the trigger from connecting, given that the IAM role is configured the way the error requests?
Here is the error:
There was an error creating the trigger: Cannot access stream [arn]. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, and ListStreams Actions on your stream in IAM.
Here is my IAM role, which can perform GetRecords, GetShardIterator, DescribeStream, and ListStreams, plus more:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1505763107000",
      "Effect": "Allow",
      "Action": [
        "kinesis:CreateStream",
        "kinesis:DescribeStream",
        "kinesis:GetShardIterator",
        "kinesis:GetRecords",
        "kinesis:ListRecords",
        "kinesis:PutRecord",
        "kinesis:PutRecords"
      ],
      "Resource": [
        "arn:aws:kinesis:us-east-2:219021079475:stream/lead"
      ]
    },
    {
      "Sid": "Stmt1505763184000",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Sid": "Stmt1505763256000",
      "Effect": "Allow",
      "Action": [
        "lambda:*"
      ],
      "Resource": [
        "arn:aws:lambda:us-east-2:219021079475:function:logsKinesisData"
      ]
    }
  ]
}
I've not written the Lambda function beyond the setup:
'use strict';
console.log('Loading function');

exports.handler = (event, context, callback) => {
  //console.log('Received event:', JSON.stringify(event, null, 2));
  event.Records.forEach((record) => {
    // Kinesis data is base64 encoded so decode here
    const payload = Buffer.from(record.kinesis.data, 'base64').toString('ascii');
    console.log('Decoded payload:', payload);
  });
  callback(null, `Successfully processed ${event.Records.length} records.`);
};