How to download from s3 bucket with ARN - amazon-web-services

I was given this info:
AWS s3 Bucket
ci****a.open
Amazon Resource Name (ARN)
arn:aws:s3:::ci****a.open
AWS Region
US West (Oregon) us-west-2
How am I supposed to download the folder without an Access Key ID and Secret Access Key?
I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key.
I usually use S3 Browser, but it also asks for an Access Key ID and Secret Access Key.

I tried with the CLI and it still asks me for an Access Key ID and Secret Access Key.
For the CLI you have to use --no-sign-request so that credentials are skipped. This will only work if the objects and/or the bucket are public.
CLI S3 commands, such as cp, require an S3 URL, not an S3 ARN:
s3://bucket-name
You can create it yourself from the ARN, since the bucket name is part of the ARN. In your case it would be ci****a.open:
s3://ci****a.open
So you can try the following to copy everything to the current working folder:
aws s3 cp s3://ci****a.open . --recursive --no-sign-request
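As an aside, the ARN-to-URL conversion can be done mechanically; a minimal shell sketch, using the masked bucket name from the question as-is:

```shell
# The ARN given in the question (bucket name masked, kept as-is)
arn="arn:aws:s3:::ci****a.open"
# An S3 bucket ARN has the form arn:aws:s3:::<bucket-name>,
# so the bucket name is everything after the last colon
bucket="${arn##*:}"
echo "s3://$bucket"
# prints: s3://ci****a.open
```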

Related

Accessing S3 bucket data from EC2 instance through IAM

So I have created an IAM user and added a permission to access S3, then I created an EC2 instance and SSH'd into it.
After running the "aws s3 ls" command, the reply was:
"Unable to locate credentials. You can configure credentials by running "aws configure"."
So what's the difference between supplying IAM credentials (Access Key ID and Secret Access Key) via "aws configure" and editing the bucket policy to allow S3 access from my instance's public IP?
Even after editing the bucket policy (JSON) to allow S3 access from my instance's public IP, why am I not able to access the S3 bucket unless I use "aws configure" (Access Key ID and Secret Access Key)?
Please help! Thanks.
Since you are using EC2 you should really use EC2 Instance Profiles instead of running aws configure and hard-coding credentials on the file system.
As for your question about S3 bucket policies versus IAM roles, here is the official documentation on that. They are two separate tools you would use in securing your AWS account.
As for your specific command that failed, note that the AWS CLI tool will always try to look for credentials by default. If you want it to skip looking for credentials you can pass the --no-sign-request argument.
However, if you were just running aws s3 ls then that was trying to list all the buckets in your account, which you would have to have IAM credentials for. Individual bucket policies would not be taken into account in that scenario.
If you were running aws s3 ls s3://bucketname then that may have worked as aws s3 ls s3://bucketname --no-sign-request.
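For reference, an IP-based bucket policy of the kind the question describes might look like the sketch below (bucket name and IP address are placeholders). Note that it only applies to requests against this specific bucket, which is why it cannot help with a bare aws s3 ls listing all buckets in the account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowFromInstancePublicIp",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.10/32" }
      }
    }
  ]
}
```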
When you create an IAM user there are two parts:
policies
roles
Policies are attached to a user: what services the user can or can't access.
Roles are attached to an application: what access that application can have.
So you have to permit EC2 to access S3.
There are two ways to do that:
aws configure
attach a role to the EC2 instance
While 1 is tricky and lengthy, 2 is easy:
Go to EC2 instance -> Actions -> Security -> Modify IAM role -> then select the role (an EC2 + S3 access role)
That's it, you can simply run aws s3 ls from the EC2 instance.
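For reference, the role selected in that last step needs a trust policy allowing EC2 to assume it; a minimal sketch (attach an S3 permissions policy, e.g. the AWS-managed AmazonS3ReadOnlyAccess, to the role separately):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```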

Put Object to S3 Bucket of another account

We are able to put objects into our own S3 bucket.
But now we have a requirement to put these objects directly into an S3 bucket that belongs to a different account and a different region.
Here we have few questions:
Is this possible?
If possible what changes we need to do for this?
They have provided us Access Key, Secret Key, Region, and Bucket details.
Any comments and suggestions will be appreciated.
IAM credentials are associated with a single AWS Account.
When you launch your own Amazon EC2 instance with an assigned IAM Role, it will receive access credentials that are associated with your account.
To write to another account's Amazon S3 bucket, you have two options:
Option 1: Your credentials + Bucket Policy
The owner of the destination Amazon S3 bucket can add a Bucket Policy on the bucket that permits access by your IAM Role. This way, you can just use the normal credentials available on the EC2 instance.
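A bucket policy for Option 1 might look like the following sketch (the account ID, role name, and bucket name are placeholders to be replaced with real values):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutFromOtherAccountRole",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:role/my-ec2-role"
      },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
```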
Option 2: Their credentials
It appears that you have been given access credentials for their account. You can use these credentials to access their Amazon S3 bucket.
As detailed on Working with AWS Credentials - AWS SDK for Java, you can provide these credentials in several ways. However, if you are using BOTH the credentials provided by the IAM Role AND the credentials that have been given to you, it can be difficult to 'switch between' them. (I'm not sure if there is a way to tell the Credentials Provider to switch between a profile stored in the ~/.aws/credentials file and those provided via instance metadata.)
Thus, the easiest way is to specify the Access Key and Secret Key when creating the S3 client:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_key_id");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
.build();
It is generally not a good idea to put credentials in your code. You should load them from a configuration file.
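One common way to keep the keys out of code is a named profile in ~/.aws/credentials (a sketch with the standard AWS example placeholder values):

```ini
[their-account]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```

With a profile like this in place, the Java SDK v1 code above could use new ProfileCredentialsProvider("their-account") instead of AWSStaticCredentialsProvider with hard-coded strings.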
Yes, it's possible. You need to allow cross-account S3 put operations in the bucket's policy.
Here is a blog by AWS. It should help you set up the cross-account put action.

S3 Bucket without ACL - No permission

I found an issue with an S3 bucket.
The bucket doesn't have any ACL associated with it, and the user that created the bucket was deleted.
How is it possible to add an ACL to the bucket to get control back?
For any command using the AWS CLI, the result is always the same: An error occurred (AccessDenied) when calling the operation: Access Denied
Also in the AWS console the access is denied.
First things first: an AccessDenied error in AWS indicates that your AWS user does not have access to the S3 service, so get S3 permissions added to your IAM user account.
If you do have access to the S3 service: since you are using the CLI, make sure your AWS access key and secret are still correct locally.
Now the interesting use case:
You have access to the S3 service but cannot access the bucket because the bucket has some policies set.
In this case, if the user who set the policies left and no remaining user is able to access the bucket, the best way is to ask the AWS root account holder to change the bucket permissions.
An IAM user with the managed policy named AdministratorAccess should be able to access all S3 buckets within the same AWS account. Unless you have applied some unusual S3 bucket policy or ACL, in which case you might need to log in as the account's root user and modify that bucket policy or ACL.
See Why am I getting an "Access Denied" error from the S3 when I try to modify a bucket policy?
I just posted this on a related thread...
https://stackoverflow.com/a/73977525/999943
https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-owner-full-control-acl/
Basically, when putting objects as the non-bucket-owner, you need to set the ACL at the same time:
--acl bucket-owner-full-control

How to sync data between two s3 buckets owned by different profiles

I want to sync data between two s3 buckets.
The problem is that each one is owned by a different AWS account (i.e. a different access key ID and secret access key).
I tried to make the destination bucket publicly writable, but I still get
fatal error: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
How to solve this?
I solved it by giving the source bucket's AWS account permission to write to the destination bucket.
I went to the "Permissions" tab of the destination bucket, "Access for other AWS accounts", and granted permissions to the source bucket's AWS account by using the account email.
Then I copied the files by using AWS CLI (don't forget to grant full access to the recipient account!):
aws s3 cp s3://<source_bucket>/<folder_path>/ s3://<destination_bucket> --recursive --profile <source_AWSaccount_profile> --grants full=emailaddress=<destination_account_emailaddress>

S3 and IAM settings update

We are in a strange situation at the moment: our DevOps guy left the organization. When we disable his keys in IAM, we see this kind of error in production: "An error occurred (AccessDenied) when calling the PutObject operation: Access Denied when trying to upload an object on your bucket: XXXXX-prd-asset-images/." If I check the DevOps guy's IAM user, I can see S3 listed as the last used service. I understand this is only partial information, but any help would be appreciated.
Can we look at prod instances if AWS keys stored there?
Can we check any policy?
Can we check bucket information?
That DevOps guy's AWS keys were being used for the AWS CLI.
You need to create a generic account in AWS IAM that is not used by any developer or system administrator, to avoid this situation in the future.
For now, create a generic account which has the same IAM policies as the DevOps guy's account. SSH to the server and go to the file ~/.aws/credentials; there you will find the AWS key and AWS secret. Replace them with the new key and secret of the account created above.
Or you can run the following and paste the access key and secret access key when prompted, along with the proper region for your EC2 instance:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json