Unable to upload files to S3, Access Denied - amazon-web-services

I am running an EC2 instance with an IAM role that has AmazonS3FullAccess attached. On it I am running a Node.js server and trying to upload a file to an S3 bucket (public access), but I am getting a 403 Access Denied error.
Since the EC2 instance gets S3 access through the role, I didn't provide an accessKey/secret in Node, following:
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-iam.html
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const params = {
  Bucket: 'sample_name', // pass your bucket name
  Key: 'filename',
  Body: "<p>Hey</p>",
  ContentDisposition: 'inline',
  ContentType: 'text/html',
};

s3.upload(params, function (s3Err, data) {
  if (s3Err) throw s3Err;
  console.log(data);
});
Could someone please help me with this?
Thanks in advance.

Go to your bucket permissions and check whether a bucket policy is already attached. If not, add the one below; it might work!
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket_name",
        "arn:aws:s3:::bucket_name/*"
      ]
    }
  ]
}
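If the policy alone doesn't fix it, it can help to confirm which identity the SDK is actually resolving on the instance. A minimal diagnostic sketch, assuming the aws-sdk v2 package from the question and that the code runs on the EC2 instance with no keys hard-coded (so the SDK should fall back to the instance-profile role):

const AWS = require('aws-sdk');

// With no explicit credentials configured, the SDK falls back to the
// instance metadata service and uses the attached IAM role.
const sts = new AWS.STS();
sts.getCallerIdentity({}, function (err, data) {
  if (err) {
    console.error('Could not resolve credentials:', err);
  } else {
    // Expect an assumed-role ARN containing the instance's IAM role name
    console.log('Running as:', data.Arn);
  }
});

If the printed ARN is not the role with AmazonS3FullAccess, the 403 is likely coming from whatever credentials the SDK picked up instead.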

Related

Identity Pool Role Can't Access S3 Bucket Access Point

Summary: I am using AWS Amplify Auth class with a pre-configured Cognito User Pool for authentication. After authentication, I am using the Cognito ID token to fetch identity pool credentials (using AWS CredentialProviders SDK) whose assumed role is given access to an S3 access point. I then attempt to fetch a known object from the bucket's access point using the AWS S3 SDK. The problem is that the request returns a response of 403 Forbidden instead of successfully getting the object, despite my role policy and bucket (access point) policy allowing the s3:GetObject action on the resource.
I am assuming something is wrong with the way my policies are set up. Code snippets below.
I am also concerned that I'm not getting the right role back from the credentials provider, but I don't allow unauthenticated roles on the identity pool, so I am not sure, and I don't know how to verify the role returned in the credentials' session token.
I also may not be configuring the SDK client objects properly, but I followed the documentation to the best of my understanding (I am using AWS SDK v3, not v2, so the syntax is slightly different and it uses modular imports).
Backend Configurations - IAM
Identity Pool: Authenticated Role Trust Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "cognito-identity.amazonaws.com"
      },
      "Action": [
        "sts:AssumeRoleWithWebIdentity",
        "sts:TagSession"
      ],
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "<MY_IDENTITY_POOL_ID>"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
Identity Pool: Authenticated Role S3 Access Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::<MY_ACCESS_POINT_NAME>/object/*"
    }
  ]
}
Backend Configurations - S3
S3 Bucket and Access Points: Block All Public Access
S3 Bucket CORS Policy:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "GET",
      "PUT",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [],
    "MaxAgeSeconds": 300
  }
]
S3 Bucket Policy (Delegates Access Control to Access Points):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DelegateAccessControlToAccessPoints",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "*",
      "Resource": [
        "arn:aws:s3:::<MY_BUCKET_NAME>",
        "arn:aws:s3:::<MY_BUCKET_NAME>/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:DataAccessPointAccount": "<MY_ACCT_ID>"
        }
      }
    }
  ]
}
Access Point Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessPointToGetObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<ACCT_ID>:role/<MY_IDENTITY_POOL_AUTH_ROLE_NAME>"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:<REGION>:<ACCT_ID>:accesspoint/<MY_ACCESS_POINT_NAME>/object/*"
    }
  ]
}
Front End AuthN & AuthZ
Amplify Configuration of User Pool Auth
Amplify.configure({
  Auth: {
    region: '<REGION>',
    userPoolId: '<MY_USER_POOL_ID>',
    userPoolWebClientId: '<MY_USER_POOL_APP_CLIENT_ID>'
  }
})
User AuthZ process:
On user login event, call Amplify's Auth.signIn() which returns type CognitoUser:
// Log in user (error checking omitted here for post)
const CognitoUser = await Auth.signIn(email, secret);

// Get ID token JWT
const CognitoIdToken = CognitoUser.signInUserSession.getIdToken().getJwtToken();

// Use @aws-sdk/credential-providers to get identity pool credentials
const credentials = fromCognitoIdentityPool({
  clientConfig: { region: '<REGION>' },
  identityPoolId: '<MY_IDENTITY_POOL_ID>',
  logins: {
    'cognito-idp.<REGION>.amazonaws.com/<MY_USER_POOL_ID>': CognitoIdToken
  }
})

// Create S3 SDK client
const client = new S3Client({
  region: '<REGION>',
  credentials
})

// Format S3 GetObjectCommand parameters for the object to get from the access point
const s3params = {
  Bucket: '<MY_ACCESS_POINT_ARN>',
  Key: '<MY_OBJECT_KEY>'
}

// Create S3 client command object
const getObjectCommand = new GetObjectCommand(s3params);

// Get object from access point (execute command)
const response = await client.send(getObjectCommand); // -> 403 FORBIDDEN
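Regarding the concern above about not knowing how to verify which role comes back from the identity pool: a minimal sketch, assuming @aws-sdk/client-sts is installed, that reuses the same credentials object with the STS client (region placeholder as above):

import { STSClient, GetCallerIdentityCommand } from '@aws-sdk/client-sts';

// Reuse the credentials provider built from the Cognito ID token above
const stsClient = new STSClient({ region: '<REGION>', credentials });
const identity = await stsClient.send(new GetCallerIdentityCommand({}));
// For the authenticated identity-pool role this should print an
// assumed-role ARN containing that role's name
console.log(identity.Arn);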

Uploading to AWS S3 bucket from a profile in a different environment

I have access to one of two AWS environments, and in it I've created a protected S3 bucket that should receive uploads from an account in the environment I do not have access to. That other environment and account are what a project's CI uses.
environment I have access to: env1
environment I do not have access to: env2
account I do not have access to: user/ci
bucket name: content
S3 bucket policy:
{
  "Version": "2008-10-17",
  "Id": "PolicyForCloudFrontPrivateContent",
  "Statement": [
    {
      ...
    },
    {
      "Sid": "Allow access to bucket from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket*"
      ],
      "Resource": "arn:aws:s3:::content"
    },
    {
      "Sid": "Allow access to bucket items from profile in env1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/ci"
      },
      "Action": [
        "s3:Get*",
        "s3:PutObject",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
From inside a container that's configured for env1 and user/ci I'm testing with the command
aws s3 sync content/ s3://content/
and I get the error:
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have two questions:
Am I even using the correct aws command to upload the data to the bucket?
Am I missing something from my bucket policy?
For the latter, I've basically followed what a load of examples and answers online have suggested.
To test your policy, I did the following:
Created an IAM User with no policies
Created an Amazon S3 bucket
Attached your Bucket Policy to the bucket, and updated the ARN and bucket name
Tested access to the bucket with:
aws s3 ls s3://bucketname
aws s3 sync folder/ s3://bucketname/folder/
It worked fine.
Therefore, the policy you display appears to be giving all necessary permissions. It is possible that you have something else that is Denying access on the bucket.
The solution was to grant the following policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::content",
        "arn:aws:s3:::content/*"
      ]
    }
  ]
}
to user/ci in env1.
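For illustration, a rough Node.js sketch, assuming the aws-sdk v2 package, the bucket name content from the question, and a hypothetical local file content/example.txt, of what aws s3 sync does under the hood: it first lists the destination (which needs s3:ListBucket on the bucket ARN and is where the ListObjectsV2 AccessDenied came from), then uploads each changed file (which needs s3:PutObject on the object ARN):

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();

// Step 1 of sync: list the destination, which requires s3:ListBucket on arn:aws:s3:::content
s3.listObjectsV2({ Bucket: 'content' }, function (listErr, listing) {
  if (listErr) throw listErr; // an AccessDenied here matches the error in the question

  console.log('Existing keys:', (listing.Contents || []).map(function (o) { return o.Key; }));

  // Step 2 of sync: upload changed files, which requires s3:PutObject on arn:aws:s3:::content/*
  s3.putObject({
    Bucket: 'content',
    Key: 'example.txt', // hypothetical file from the local content/ folder
    Body: fs.createReadStream('content/example.txt')
  }, function (putErr) {
    if (putErr) throw putErr;
    console.log('Uploaded example.txt');
  });
});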

Amazon S3 VPC endpoint access issue

Do we need to make the S3 bucket public if we want to use a VPC endpoint to access the bucket? The bucket is private and I have a bucket policy as follows.
{
  "Version": "2012-10-17",
  "Id": "Policy1415115909152",
  "Statement": [
    {
      "Sid": "Access-to-specific-VPCE-only",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": [
        "arn:aws:s3:::bucket-name",
        "arn:aws:s3:::bucket-name/*"
      ],
      "Condition": {
        "StringEquals": {
          "aws:sourceVpce": "vpce-XXXXXXXXX"
        }
      }
    }
  ]
}
I am getting the following error while accessing it
403 Forbidden
Code: AccessDenied
Message: Access Denied
RequestId: 3B5263AFE5F08F7D
HostId: M2+BaRG/GqiasUSkPo9rC46aC84pmZHNcbSnA2UcWcHxWntFRWjcli7VdN0wLpnsSZgK659008Y=
I have enabled static website hosting on the bucket; the idea was to access it privately from within the VPC.
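For what it's worth, a minimal Node.js sketch, assuming aws-sdk v2 running on an instance inside the VPC and a placeholder object key, to check whether the object is reachable through the SDK (i.e. the REST API) from inside the VPC, separately from the static-website endpoint:

const AWS = require('aws-sdk');

const s3 = new AWS.S3();
// The aws:sourceVpce condition only matches requests that reach S3 through
// that VPC endpoint, so this should be run from inside the VPC.
s3.getObject({ Bucket: 'bucket-name', Key: 'index.html' /* placeholder key */ }, function (err, data) {
  if (err) return console.error(err.code, err.message); // e.g. AccessDenied
  console.log(data.Body.toString());
});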

Amazon AWS S3 bucket anonymous upload using curl

I'm trying to set up an S3 bucket to accept anonymous uploads while allowing the bucket owner full rights and preventing public read access. Following the code from here I've set up the bucket policy below. I'd like to use curl to upload to the bucket, but all I'm getting is
Access Denied
Here's the bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "allow-anon-put",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::[mybucket]/uploads/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    {
      "Sid": "deny-other-actions",
      "Effect": "Deny",
      "NotPrincipal": {
        "AWS": "arn:aws:iam::[myid]:root"
      },
      "NotAction": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::[mybucket]/*"
    }
  ]
}
And the curl PUT request:
curl --request PUT --upload-file "thefile.gif" -k https://[mybucket].s3.amazonaws.com/uploads/
Anonymous uploads are a bad idea, but at least this policy constraint requires that the uploader give you control of the object:
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
It's not intuitively obvious, but owning a bucket doesn't mean you own the objects. If they are not uploaded with credentials from your account, then you don't own them. You pay for them, of course, but if an object is uploaded into your bucket by another account or by the anonymous user, the only privilege you may end up with on that object is that you can delete it -- you can end up with objects you can't download or copy, just delete.
With this policy in place, the uploads have to comply with the policy, setting the object ACL to give you control:
curl ... -H 'x-amz-acl: bucket-owner-full-control'
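To do the same from Node.js instead of curl, a minimal sketch, assuming aws-sdk v2 and the placeholder bucket/file names from the question: the SDK's unauthenticated request path mirrors the anonymous upload, and the ACL parameter sets the x-amz-acl header the policy requires.

const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3();
// makeUnauthenticatedRequest sends the call unsigned, i.e. as the anonymous user
s3.makeUnauthenticatedRequest('putObject', {
  Bucket: '[mybucket]',
  Key: 'uploads/thefile.gif',
  Body: fs.createReadStream('thefile.gif'),
  ACL: 'bucket-owner-full-control' // becomes the x-amz-acl header the condition checks
}, function (err, data) {
  if (err) throw err;
  console.log('Uploaded:', data);
});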

Grant EC2 instance access to S3 Bucket

I want to grant my EC2 instance access to an S3 bucket.
On this EC2 instance, a container with my application is launched, but it doesn't get permission on the S3 bucket.
This is my bucket policy
{
  "Version": "2012-10-17",
  "Id": "Policy1462808223348",
  "Statement": [
    {
      "Sid": "Stmt1462808220978",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::714656454815:role/ecsInstanceRole"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "private-ip/32"
        }
      }
    }
  ]
}
But it doesn't work unless I give the bucket permission for everyone to access it.
I tried to curl the file in the S3 bucket from inside the EC2 instance, but that doesn't work either.
As of 2019, at least, there is a much easier and cleaner way to do it (credentials never have to be stored on the instance; it can fetch them automatically):
create an IAM Role for your instance and assign it
create a policy to grant access to your s3 bucket
assign the policy to the instance's IAM role
upload/download objects, e.g. via the AWS CLI for S3: aws s3 cp <S3Uri> <LocalPath> (a Node.js sketch of this step follows the example policy below)
For step 2, an example JSON policy to allow read and write access to objects in an S3 bucket is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListObjectsInBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::bucket-name"]
    },
    {
      "Sid": "AllObjectActions",
      "Effect": "Allow",
      "Action": "s3:*Object",
      "Resource": ["arn:aws:s3:::bucket-name/*"]
    }
  ]
}
You have to adjust the allowed actions and replace "bucket-name".
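If you'd rather use the SDK than the CLI for step 4, here is a minimal Node.js sketch, assuming aws-sdk v2 on the instance and placeholder bucket/key names; note that no credentials appear in the code, since the SDK picks up the instance role automatically:

const AWS = require('aws-sdk');
const fs = require('fs');

// No keys here: with the IAM role attached to the instance, the SDK fetches
// temporary credentials from the instance metadata service on its own.
const s3 = new AWS.S3();

// Upload (covered by "s3:*Object" on bucket-name/*)
s3.upload({ Bucket: 'bucket-name', Key: 'report.txt', Body: fs.createReadStream('report.txt') }, function (err, data) {
  if (err) throw err;
  console.log('Uploaded to', data.Location);

  // Download the same object back (also covered by "s3:*Object")
  s3.getObject({ Bucket: 'bucket-name', Key: 'report.txt' }, function (getErr, obj) {
    if (getErr) throw getErr;
    console.log(obj.Body.toString());
  });
});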
There is no direct way of granting an EC2 instance access to the S3 server, but you can try the following.
Create a new user in AWS IAM, and download the credentials file.
This user will represent your EC2 server.
Provide the user with permissions to your S3 Bucket.
Next, place the credentials file in the following location:
EC2 - Windows Instance:
a. Place the credentials file anywhere you wish. (e.g. C:/credentials)
b. Create an environment variable AWS_CREDENTIAL_PROFILES_FILE and put the value as the path where you put your credentials file (e.g. C:/credentials)
EC2 - Linux Instance
a. Follow steps from windows instance
b. Create a folder .aws inside your app-server's root folder (e.g. /usr/share/tomcat6).
c. Create a symlink between your environment variable and your .aws folder
sudo ln -s $AWS_CREDENTIAL_PROFILES_FILE /usr/share/tomcat6/.aws/credentials
Now that your credentials file is placed, you can use Java code to access the bucket.
NOTE: AWS-SDK libraries are required for this
AWSCredentials credentials = null;
try {
    credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
    LOG.error("Unable to load credentials " + e);
    failureMsg = "Cannot connect to file server.";
    throw new AmazonClientException(
        "Cannot load the credentials from the credential profiles file. " +
        "Please make sure that your credentials file is at the correct " +
        "location (environment variable : AWS_CREDENTIAL_PROFILES_FILE), and is in valid format.",
        e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);

ObjectListing objectListing = s3.listObjects(new ListObjectsRequest().withBucketName(bucketName).withPrefix(prefix));
Where bucketName = [Your Bucket Name]
and prefix = [your folder structure inside your bucket, where your file(s) are contained]
Hope that helps.
Also, if you are not using Java, you can check out AWS-SDKs in other programming languages too.
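For example, a rough Node.js equivalent of the Java snippet above, assuming aws-sdk v2 and that the credentials file sits in the default ~/.aws/credentials location with a default profile (bucketName and prefix are placeholders as described above):

const AWS = require('aws-sdk');

// Load keys from the shared credentials file instead of hard-coding them
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'default' });

const s3 = new AWS.S3({ region: 'us-west-2' });
s3.listObjectsV2({ Bucket: 'bucketName', Prefix: 'prefix/' }, function (err, data) {
  if (err) return console.error('Unable to list objects:', err);
  data.Contents.forEach(function (obj) {
    console.log(obj.Key);
  });
});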
I found it out: it only works with the public IP of the EC2 instance.
Try this:
{
  "Version": "2012-10-17",
  "Id": "Policy1462808223348",
  "Statement": [
    {
      "Sid": "Stmt1462808220978",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "yourIp/24"
        }
      }
    }
  ]
}
I faced the same problem. I finally resolved it by creating an access point for the bucket in question using the AWS CLI (see https://docs.aws.amazon.com/AmazonS3/latest/dev/creating-access-points.html), and I then created a bucket policy like the following:
{
  "Version": "2012-10-17",
  "Id": "Policy1583357393961",
  "Statement": [
    {
      "Sid": "Stmt1583357315674",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
      },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<your-bucket>"
    },
    {
      "Sid": "Stmt1583357391961",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/ecsInstanceRole"
      },
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    }
  ]
}
Please make sure you are using a newer version of the AWS CLI (1.11.xxx didn't work for me). I finally installed version 2 of the CLI to get this to work.
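As a usage note, a minimal Node.js sketch for reading an object through the access point rather than the bucket name, assuming a recent aws-sdk v2 release that accepts access point ARNs as the Bucket parameter and a placeholder object key:

const AWS = require('aws-sdk');

const s3 = new AWS.S3({ region: '<region>' });
s3.getObject({
  // Recent SDK versions accept the access point ARN in place of the bucket name
  Bucket: 'arn:aws:s3:<region>:<account-id>:accesspoint/<access-point-name>',
  Key: 'path/to/object.txt' // placeholder key
}, function (err, data) {
  if (err) throw err;
  console.log(data.Body.length, 'bytes');
});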