How to access S3 bucket with this strange plain text file? - amazon-web-services

I got a text file for accessing an S3 bucket, like the following:
arn:aws:iam::############:user/aaaaaaaa-aaaaaaaaa-aaa
User
aaaaaaaa-aaaaaaaaa-aaa
Access key ID
AAAAAAAAAAAAAAAAAAAA
Secret access key
AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
        }
    ]
}
I have an AWS account and can create my own buckets, but I see no UI for obtaining such files.
UPDATE
I issued
>aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
then I did
>aws configure
AWS Access Key ID [None]: AAAAAAAAAAAAAAAAAAAA
AWS Secret Access Key [None]: AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
Default region name [None]:
Default output format [None]:
and now
>aws s3 ls
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Why? Why didn't I need either the User or the Resource value from my text file, and how am I supposed to use this data?
UPDATE 2
I tried
>aws s3 ls
>aws s3 ls s3://bbbbbbbb-bbbbbbbbb-bbbbbbb
>aws s3 ls bbbbbbbb-bbbbbbbbb-bbbbbbb
>aws s3 ls bbbbbbbb-bbbbbbbbb-bbbbbbb/*
>aws s3 ls s3:/bbbbbbbb-bbbbbbbbb-bbbbbbb
and got Access Denied in all cases.

It appears that your System Administrators have created some configuration in AWS and wanted to let you know what they have done. The file is a dump of information from various places; it is for your reference and is not something you 'use' somewhere.
The first line is the Amazon Resource Name (ARN) that uniquely identifies you as a user. It can be used in security policies to grant you access to resources:
arn:aws:iam::############:user/aaaaaaaa-aaaaaaaaa-aaa
They are also telling you your Username:
User
aaaaaaaa-aaaaaaaaa-aaa
The Access Key and Secret Key can be used to identify yourself, as you have done with the AWS Command-Line Interface (CLI):
Access key ID
AAAAAAAAAAAAAAAAAAAA
Secret access key
AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA
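If you ever want to use those keys from code rather than aws configure, a minimal boto3 sketch would look like this (the key and bucket values are the placeholders from your file, the file name is made up, and in practice the shared credentials file is preferable to hard-coding keys):
import boto3

# Minimal sketch using the Access Key ID and Secret Access Key from the file.
# The values are the question's placeholders; substitute your real ones.
session = boto3.Session(
    aws_access_key_id="AAAAAAAAAAAAAAAAAAAA",
    aws_secret_access_key="AAAAAAAAAAA/AAAAAAAAAAAAAAAAAAAAAAAAAAAA",
)
s3 = session.client("s3")

# The policy shown next in the file allows s3:PutObject / s3:GetObject on the
# bucket's objects, so uploading and downloading a file should work.
# "hello.txt" is a made-up file name.
s3.upload_file("hello.txt", "bbbbbbbb-bbbbbbbbb-bbbbbbb", "hello.txt")
s3.download_file("bbbbbbbb-bbbbbbbbb-bbbbbbb", "hello.txt", "hello-copy.txt")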
The next part is an IAM Policy:
{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:ListAllMyBuckets",
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
        }
    ]
}
This policy states that you can perform the listed actions against the specified Amazon S3 bucket.
It's not a great policy, however, because the last three actions actually apply to a bucket (or to no bucket at all), so they should not be used with a Resource that specifies bucket/*.
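If the administrators are willing to tighten it, here is a sketch of how the policy could be split by resource type and applied on their side (the user and bucket names are the placeholders from your file; the policy name is made up):
import json
import boto3

# Sketch of a corrected policy an administrator could attach to the user:
# account-level, bucket-level and object-level actions each get the matching
# Resource. Names are the question's placeholders; the policy name is made up.
corrected_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": "arn:aws:s3:::*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*",
        },
    ],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="aaaaaaaa-aaaaaaaaa-aaa",
    PolicyName="s3-single-bucket-access",
    PolicyDocument=json.dumps(corrected_policy),
)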
If you are trying to access information in Amazon S3 but receive Access Denied, then contact your System Administrator to update the policy to grant you access.

Based on the update in your question, you have configured your credentials properly, but you haven't specified a default region in the configuration. Check with your admins what the region for this S3 bucket is; it could be something like us-east-1 or us-west-2. Once you have your bucket's region, you can issue a command like this:
aws s3 ls <name of your bucket> --region us-east-1
The reason you are receiving Access Denied is that you do not have access to all buckets on S3, only to one particular bucket. This is indicated by this line:
"Resource":"arn:aws:s3:::bbbbbbbb-bbbbbbbbb-bbbbbbb/*"
where bbbbbbbb-bbbbbbbbb-bbbbbbb is the name of your bucket.
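As a quick check from Python, here is a minimal boto3 sketch of the same idea: list only the bucket you were granted instead of calling ListBuckets (which is what a bare aws s3 ls does). The bucket name and region are placeholders your admins would need to confirm.
import boto3

# List only the granted bucket rather than all buckets in the account.
s3 = boto3.client("s3", region_name="us-east-1")
response = s3.list_objects_v2(Bucket="bbbbbbbb-bbbbbbbbb-bbbbbbb")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])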

You need to go to IAM and create a policy for your bucket. Then you need to attach this policy to your user's account, and then you can access the bucket using your user's Access Key and Secret Access Key.

Related

S3 cross account permission (view via AWS UI and copy bucket content)

I'm trying to access (i.e. see on my AWS console beside my own buckets) an external bucket (bucket B) and, if possible, copy it.
What permissions (JSON file) do I need to ask for from the owner of bucket B? Are full read and full list permissions for my account enough? If I receive full read and full list, will I be able to see the bucket in my account under S3 buckets?
Example 2: Bucket owner granting cross-account bucket permissions - Amazon Simple Storage Service
Viewing / Downloading contents
The Amazon S3 management console only shows buckets in your own account.
However, you can 'cheat' and modify the URL to show another bucket for which you have access permission.
For example, when viewing the contents of a bucket in the S3 management console, the URL is:
https://us-east-1.console.aws.amazon.com/s3/buckets/BUCKET-NAME?region=ap-southeast-2&tab=objects
You can modify BUCKET-NAME to view a specific bucket.
Alternatively, you can access buckets via the AWS CLI, regardless of which account 'owns' the bucket, as long as you have sufficient permissions:
aws s3 ls s3://BUCKET-NAME
Required Permissions
The permissions you will need on the bucket depend totally on what you wish to do. If you want the ability to list the contents of the bucket, then you will need s3:ListBucket permission. If you want the ability to download an object, you will need s3:GetObject permission.
It would be something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetObject"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::BUCKET-NAME",
                "arn:aws:s3:::BUCKET-NAME/*"
            ],
            "Principal": {
                "AWS": [
                    "arn:aws:iam::111122223333:user/YOUR-USER-NAME"
                ]
            }
        }
    ]
}
When granting access, the owner of Bucket B will need to grant permissions to your IAM User (in your own AWS Account). Therefore, you will need to give them the ARN of your own IAM User.
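Once that is in place, here is a hedged boto3 sketch of what you could run from your own account (BUCKET-NAME and the object key are placeholders):
import boto3

# Sketch, assuming the owner of Bucket B has attached a bucket policy like
# the one above that names your IAM user as Principal.
s3 = boto3.resource("s3")
bucket = s3.Bucket("BUCKET-NAME")

# Requires s3:ListBucket on arn:aws:s3:::BUCKET-NAME
for obj in bucket.objects.all():
    print(obj.key)

# Requires s3:GetObject on arn:aws:s3:::BUCKET-NAME/* ("some/key.txt" is made up)
bucket.download_file("some/key.txt", "key.txt")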

How do you allow granting public read access to objects uploaded to AWS S3?

I have created a policy that allows access to a single S3 bucket in my account. I then created a group that has only this policy and a user that is part of that group.
The user can view, delete and upload files to the bucket, as expected. However, the user does not seem to be able to grant public read access to uploaded files.
When the Grant public read access to this object(s) option is selected, the upload fails.
The bucket is hosting a static website and I want to allow the frontend developer to upload files and make them public.
The policy for the user role is below:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "arn:aws:s3:::my-bucket"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": "arn:aws:s3:::my-bucket/*"
        }
    ]
}
This is what happens when the IAM user tries to grant public access to the uploaded file: a proxy error appears, which seems unrelated, but essentially the upload gets stuck and nothing happens. If they don't select the Grant public access option, the upload goes through immediately (despite the proxy error still showing up).
To reproduce your situation, I did the following:
Created a new Amazon S3 bucket with default settings (Block Public Access = On)
Created an IAM User (with no policies attached)
Created an IAM Group (with no policies attached)
Added the IAM User to the IAM Group
Attached your policy (from the Question) to the IAM Group (updating the bucket name) as an inline policy
Logged into the Amazon S3 management console as the new IAM User
At this point, the user received an Access Denied error because they were not permitted to list all Amazon S3 buckets. Thus, the console was not usable.
Instead, I ran this AWS CLI command:
aws s3 cp foo.txt s3://new-bucket/ --acl public-read
The result was:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
However, the operation succeeded with:
aws s3 cp foo.txt s3://new-bucket/
This means that the --acl is the component that was denied.
I then went to Block Public Access for the bucket and turned OFF the option called "Block public access to buckets and objects granted through new access control lists (ACLs)".
I then ran this command again:
aws s3 cp foo.txt s3://new-bucket/ --acl public-read
It worked!
To verify this, I went back into Block Public Access and turned ON all options (via the top checkbox). I re-ran the command and it was Access Denied again, confirming that the cause was the Block Public Access setting.
Bottom line: Turn off the first Block Public Access setting.
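If you prefer to script that change, here is a hedged boto3 sketch of the same Block Public Access adjustment; the bucket name is the placeholder used above, and it only relaxes the ACL-related options:
import boto3

# Inspect and relax only the ACL-related Block Public Access options,
# mirroring what was toggled in the console.
s3 = boto3.client("s3")

print(s3.get_public_access_block(Bucket="new-bucket")["PublicAccessBlockConfiguration"])

s3.put_public_access_block(
    Bucket="new-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,       # allow uploads with public ACLs
        "IgnorePublicAcls": False,      # honour those ACLs once set
        "BlockPublicPolicy": True,      # keep public bucket policies blocked
        "RestrictPublicBuckets": True,
    },
)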
You can also do this through the AWS CLI by updating the object's ACL.
Option 1:
For an object that's already stored in Amazon S3, you can run this command to update its ACL and allow public read access:
aws s3api put-object-acl --bucket <<S3 Bucket Name>> --key <<object>> --acl public-read
Option 2:
Run this command to grant full control of the object to the AWS account owner and read access to everyone else:
aws s3api put-object-acl --bucket <<S3 Bucket Name>> --key <<object>> --grant-full-control emailaddress=<<AccountOwnerEmail@emaildomain.com>> --grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers
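If you are scripting this in Python rather than the CLI, the boto3 equivalents would look roughly like this (bucket name, key and owner e-mail are placeholders):
import boto3

# boto3 equivalents of the s3api commands above; all names are placeholders.
s3 = boto3.client("s3")

# Option 1: canned ACL granting public read access
s3.put_object_acl(Bucket="my-bucket", Key="path/to/object.txt", ACL="public-read")

# Option 2: explicit grants - full control to the owner, read to everyone
s3.put_object_acl(
    Bucket="my-bucket",
    Key="path/to/object.txt",
    GrantFullControl="emailaddress=<<AccountOwnerEmail@emaildomain.com>>",
    GrantRead="uri=http://acs.amazonaws.com/groups/global/AllUsers",
)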
I found that certain actions (like renaming an object) will fail when executed from the console (but will succeed from the CLI!) when ListAllMyBuckets is not granted for all s3 resources. Adding the following to the IAM policy resolved the issue:
{
    "Sid": "AccessS3Console",
    "Action": [
        "s3:ListAllMyBuckets"
    ],
    "Effect": "Allow",
    "Resource": "arn:aws:s3:::*"
}
Some of the actions I tested that failed from the console but succeeded from CLI:
Renaming an object. The console displays "Error - Failed to rename the file to ". Workaround: deleting and re-uploading the object with a new name.
Uploading an object with "Grant public read access to this object(s)". The console's status bar shows that the operation is stuck in "in progress". Workaround: Uploading the object without granting public read access, and then right clicking on it and selecting "Make public".
I experienced these issues after following the instructions at https://aws.amazon.com/premiumsupport/knowledge-center/s3-console-access-certain-bucket/, which describe how to restrict access to a single bucket (and prevent the user from seeing the full list of buckets in the account). The post didn't mention these caveats.
To limit a user's Amazon S3 console access to only a certain bucket or folder (prefix), change the following in the user's AWS Identity and Access Management (IAM) permissions:
Remove permission to the s3:ListAllMyBuckets action.
Add permission to s3:ListBucket only for the bucket or folder that you want the user to access.
Note: To allow the user to upload and download objects from the bucket or folder, you must also include s3:PutObject and s3:GetObject.
Warning: After you change these permissions, the user gets an Access Denied error when they access the main Amazon S3 console. The user must access the bucket using a direct console link to the bucket or folder.

How can I enable an ec2 instance to have private access to an S3 bucket?

First of all, I'm aware of these questions:
Grant EC2 instance access to S3 Bucket
Can't access s3 bucket using IAM-role from an ec2-instance
Getting Access Denied when calling the PutObject operation with bucket-level permission
but the solutions are not working for me.
I created a role "sample_role", attached the AmazonS3FullAccess-policy to it and assigned the role to the ec2-instance.
My bucket-policy is as follows:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::My-Account-ID:role/sample_role"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my_bucket/*"
        }
    ]
}
On my EC2 instance, listing my buckets works fine, both from the command line (aws s3 ls) and from a Python script.
But when I try to upload a file test.txt to my bucket, I get AccessDenied:
import boto3

s3_client = boto3.client('s3')
s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')

with open('test.txt', "rb") as f:
    s3_client.upload_fileobj(f, bucket.name, 'text.txt')
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
The same happens when I just try to list the objects in my bucket, either from the command line (aws s3api list-objects --bucket my_bucket) or from a Python script:
import boto3

s3_resource = boto3.resource('s3')
bucket = s3_resource.Bucket('my_bucket')

for my_bucket_object in bucket.objects.all():
    print(my_bucket_object)
Error message:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works. But I need to restrict access to the specified role.
What am I missing?
Thanks for your help!
It appears that your requirement is:
You have an Amazon S3 bucket (my_bucket)
You have an Amazon EC2 instance with an IAM Role attached
You want to allow applications running on that EC2 instance to access my_bucket
You do not want the bucket to be publicly accessible
I will also assume that you are not trying to deny other users access to the bucket if they have already been granted that access. You are purely wanting to Allow access to the EC2 instance, without needing to Deny access to other users/roles that might also have access.
You can do this by adding a policy to the IAM Role that is attached to the EC2 instance:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::my_bucket",
                "arn:aws:s3:::my_bucket/*"
            ]
        }
    ]
}
This grants ALL Amazon S3 permissions to the IAM Role for my_bucket. Note that some commands require permission on the bucket itself, while other commands require permission on the contents (/*) of the bucket.
I should also mention that granting s3:* is probably too generous, because it would allow the applications running on the instance to delete content and even delete the bucket, which is probably not what you wish to grant. If possible, limit the actions to only those that are necessary.
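To make the bucket-level vs object-level distinction concrete, here is a short boto3 sketch, assuming the role policy above is attached to the instance's role (the file names are made up):
import boto3

# Which resource each call needs permission on; my_bucket is the bucket
# from the question.
s3 = boto3.client("s3")

# Needs an Allow on arn:aws:s3:::my_bucket (the bucket itself)
s3.list_objects_v2(Bucket="my_bucket")

# Need an Allow on arn:aws:s3:::my_bucket/* (the objects in it)
s3.upload_file("test.txt", "my_bucket", "test.txt")
s3.download_file("my_bucket", "test.txt", "test-copy.txt")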
When I turn off "Block all public access" in my bucket settings and enable public access in my access control list, it obviously works.
Remove "enable public access" from this sentence and this will be your solution :-)
"Block all public access" blocks all public access and it doesn't matter what bucket policy you use. So uncheck this option and your bucket policy will start working as you planned.
So I found the problem.
The credentials on my EC2 instance were configured with the access key of a dev-user account to which the role was not assigned.
I found out by running aws sts get-caller-identity which returns the identity (e.g. IAM role) actually being used.
So it seems that the assigned role can be overridden by a configured user identity, which makes sense.
To solve the problem, I simply undid the configuration by deleting the configuration file ~/.aws/credentials. After that the identity changed to the assigned role.
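For completeness, the same identity check can be done from Python, which is handy when a script misbehaves while the CLI looks fine:
import boto3

# Equivalent of `aws sts get-caller-identity`: shows which credentials
# boto3 actually resolved (an IAM user vs. the assumed instance role).
identity = boto3.client("sts").get_caller_identity()
print(identity["Arn"])
print(identity["Account"])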

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to give the role or my user the permissions needed to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS configuration set in your bash_profile or bashrc, the AWS CLI will default to using that instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
It turns out I had forgotten that I need to complete MFA to get an access token before I can operate on S3. Thank you everyone for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
If you are wanting to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use credentials from the local credentials file
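If it helps, here is a hedged boto3 sketch of the "attach the role afterwards" step described above; the instance profile name and instance ID are placeholders:
import boto3

# Programmatic version of Actions / Instance Settings / Attach IAM Role:
# associate an instance profile (which wraps the IAM Role) with a running
# instance. Both identifiers below are made up.
ec2 = boto3.client("ec2")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "my-instance-profile"},
    InstanceId="i-0123456789abcdef0",
)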
Create an IAM user with the following permissions policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're granting full privileges to perform actions on the bucket's objects, BUT not specifying any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
To fix this, you should allow the application to access the bucket itself (first statement) and to work with all objects inside the bucket (second statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one above also tries to keep the security permissions fine-grained.
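As a quick way to see the two permission scopes from code, here is a boto3 sketch (the bucket name is the same placeholder as above, and the object key is made up):
import boto3
from botocore.exceptions import ClientError

# With only the object-level statement, listing the bucket's contents is
# denied while object-level calls still work.
s3 = boto3.client("s3")

try:
    s3.list_objects_v2(Bucket="<name-of-bucket>")   # needs s3:ListBucket on the bucket ARN
except ClientError as err:
    print(err.response["Error"]["Code"])            # AccessDenied until s3:ListBucket is added

s3.put_object(Bucket="<name-of-bucket>", Key="hello.txt", Body=b"hello")  # needs s3:PutObject on .../*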
I ran into this yesterday running a script I ran successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the aws environment changed, or perhaps I had done something that was able to bypass this before. Back in September I set the profile with
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some documentation search I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked
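The boto3 counterpart of --profile, in case the same thing bites you from a script, would be something like this (the profile and bucket names are the placeholders used above):
import boto3

# Equivalent of passing --profile on the CLI: pin the session to a named
# profile from ~/.aws/credentials instead of whatever the default resolves to.
session = boto3.Session(profile_name="my.profile.name")
s3 = session.client("s3")
s3.upload_file("myfile", "my-s3-bucket", "myfile")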
I have a Windows machine with CyberDuck from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same "aws s3 ls" command from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side tied to the machine/IP.

S3 to S3 transfer using different accounts?

I've been reading multiple posts like this one about how to transfer data with the AWS CLI from one S3 bucket to another using different accounts, but I am still unable to do so. I'm sure it's because I haven't fully grasped the concepts of accounts + permission settings in AWS yet (e.g. IAM account vs access key).
I have a vendor that gave me a user called "Foo" in account number "123456789012", with 2 access keys to access their S3 bucket "SourceBucket" in eu-central-1. I created a profile on my machine with the access keys provided by the vendor, called "sourceProfile". I have my own S3 bucket called "DestinationBucket" in us-east-1, and I set its bucket policy to the following.
{
    "Version": "2012-10-17",
    "Id": "Policy12345678901234",
    "Statement": [
        {
            "Sid": "Stmt1487222222222",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:user/Foo"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::DestinationBucket/",
                "arn:aws:s3:::DestinationBucket/*"
            ]
        }
    ]
}
Here comes the weird part. I am able to list the files and even download files from the "DestinationBucket" using the following command lines.
aws s3 ls s3://DestinationBucket --profile sourceProfile
aws s3 cp s3://DestinationBucket/test ./ --profile sourceProfile
But when I try to copy anything to the "DestinationBucket" using the profile, I get an Access Denied error.
aws s3 cp test s3://DestinationBucket --profile sourceProfile --region us-east-1
upload failed: ./test to s3://DestinationBucket/test An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Did I set up the bucket policy, and especially the list of actions, correctly? How can ls and cp from the destination bucket to local work, while cp from local to the destination bucket doesn't?
Because AWS requires that the parent account holder (the account that owns the user) does the delegation.
Actually, besides delegating access to that particular access-key user, you can also choose to set up replication on the bucket, as stated here.
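In other words, a bucket policy on DestinationBucket is only half of a cross-account grant; the vendor's account, which owns user "Foo", must also allow the action in Foo's own IAM permissions. A hedged sketch of what that delegation could look like, run with the vendor's credentials (the policy name is made up):
import json
import boto3

# Identity-side half of the cross-account grant: the vendor attaches this to
# user "Foo" in their own account. The policy name is made up.
delegation = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::DestinationBucket/*",
        }
    ],
}

iam = boto3.client("iam")  # vendor account credentials
iam.put_user_policy(
    UserName="Foo",
    PolicyName="allow-put-to-destination-bucket",
    PolicyDocument=json.dumps(delegation),
)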