How to access a public S3 bucket from another AWS account? - amazon-web-services

In one of the blog posts, the author mentioned that he uploaded a dataset into an S3 bucket and gave it public access.
s3://us-east-1.elasticmapreduce.samples/flightdata/input
Now I want to download/view the data from my Chrome browser.
When I copy-paste the above link into the Chrome address bar, it asks for:
Access key ID
Secret access key
What should I give here?
Did the author initially make it public and later make it private?
(I am confused.)
Also, can we access these kinds of URLs that start with s3:// directly from a browser?
Do I need an AWS account to access these S3 buckets?
(I know we can access web data using the HTTP protocol: http://)

The Amazon S3 management console allows you to view buckets belonging to your account. It is not possible to view S3 buckets belonging to other accounts within the S3 console.
You can, however, access them via the AWS Command-Line Interface (CLI). For example:
aws s3 ls s3://us-east-1.elasticmapreduce.samples/flightdata/input/
You can also copy files from other buckets by using aws s3 cp and aws s3 sync.
These calls require a set of valid AWS credentials (an Access Key and Secret Key), which can be stored in the credentials file via the aws configure command. You do not need specific permission to access public buckets, but you do need permission to use Amazon S3 in general. You can obtain an Access Key and Secret Key in the IAM management console where your IAM User is defined. (Or, if you do not have permission to view it, ask your AWS administrator for the Access Key and Secret Key.)
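For example, a minimal session to configure credentials, browse the sample dataset and download it locally might look like this (the local directory name ./flightdata/ is just an example):
# One-time setup: prompts for your Access Key, Secret Key, default region and output format
aws configure
# Browse the public dataset
aws s3 ls s3://us-east-1.elasticmapreduce.samples/flightdata/input/
# Copy it (or a single file from it) to a local directory
aws s3 cp s3://us-east-1.elasticmapreduce.samples/flightdata/input/ ./flightdata/ --recursive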

Related

How to add write permissions to everyone for AWS s3 bucket

I am using an S3 bucket and I would like to grant write permission to everyone. The AWS console is not allowing me to do this; instead it is asking me to use the AWS CLI to enable write permission. How can I enable write permissions for everyone using the AWS CLI?
Granting public Read access is acceptable from a security perspective if the data is intended to be public, or if it consists of files for a public website. This can be granted via a Bucket Policy. You will also need to deactivate Block Public Access on the bucket.
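For reference, a sketch of such a Bucket Policy applied via the AWS CLI (my-public-bucket is a placeholder, and the exact Block Public Access settings you relax should match your requirements):
# Relax Block Public Access so a public bucket policy can take effect
aws s3api put-public-access-block --bucket my-public-bucket \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=false,RestrictPublicBuckets=false
# A standard public-read policy: anyone may GetObject, nothing else
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-public-bucket/*"
  }]
}
EOF
aws s3api put-bucket-policy --bucket my-public-bucket --policy file://policy.json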
Granting public Write access is not a good idea. For example, somebody could upload the entire world's collection of copyrighted movies. You would be charged for the storage and you would be in violation of copyright laws. Similarly, if you allow public Read access, you would incur Data Transfer charges for everything downloaded from the bucket, which could be considerable.
Instead, your application should control access to Amazon S3. If a user is permitted to upload to your S3 bucket, your application can permit uploading objects using pre-signed URLs. This way, a user can only upload if your application allows it, and there can be restrictions on things like file type, size and filename.
Similarly, it is possible to use Amazon S3 pre-signed URLs to grant time-limited Read access to private objects stored in Amazon S3.
So, yes, you can grant public Write access (for example via a Bucket Policy or the AWS CLI), but I would advise against it.
John is correct in that in 99% of cases you should not enable write access to a bucket for everyone.
However, in my case I am developing a tool for uploading objects to S3 and I want to test all possible edge cases, including uploading to an S3 bucket as an anonymous user. As the question indicates, the AWS Management Console indeed does not let you enable public write access to a bucket (for good reason! I bet this caused way too many incidents back when it let you do this!).
So if you are in my situation, then you can run:
aws s3api put-bucket-acl --bucket bucketname --acl public-read-write
Once you've completed your testing, you can re-run the command with --acl private to make the bucket private again. Or you can use the AWS Management Console, as it will let you disable write access.
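For completeness, the revert step is the same command with the private canned ACL (bucketname is the same placeholder as above):
# Make the bucket private again once testing is done
aws s3api put-bucket-acl --bucket bucketname --acl private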

How to access objects in S3 bucket, without making the object's folder public

I have attached the AmazonS3FullAccess policy to both the IAM user and group. Also, the bucket that I am trying to access says "Objects can be public". I have explicitly made the folder inside the bucket public. Despite all this, I am getting an access denied error when I try to access it through its URL. Any idea on this?
Objects in Amazon S3 are private by default. This means that objects are not accessible by anonymous users.
You have granted permission for your IAM User to be able to access S3. Therefore, you have access to the objects but you must identify yourself to S3 so that it can verify your identity.
You should be able to access S3 content:
Via the Amazon S3 management console
Using the AWS CLI (eg aws s3 ls s3://bucketname)
Via authenticated requests in a web browser
I suspect that you have been accessing your bucket via an unauthenticated request (eg bucketname.s3.amazonaws.com/foo.txt). Unfortunately, this does not tell Amazon S3 who you are, so it will deny the request.
To access content with this type of URL, you can generate an Amazon S3 pre-signed URL, which appends some authentication information to the URL to prove your identity. An easy way to generate the URL is with the AWS CLI:
aws s3 presign s3://bucketname/foo.txt
It will return a URL that looks like this:
https://bucketname.s3.amazonaws.com/foo.txt?AWSAccessKeyId=AKIAxxx&Signature=xxx&Expires=1608175109
The URL is valid for one hour by default, and the expiry can be set to anything up to 7 days.
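If the default hour is too short, the CLI accepts an expiry in seconds, up to that 7-day limit (same placeholder names as above):
# 604800 seconds = 7 days, the maximum allowed for a pre-signed URL
aws s3 presign s3://bucketname/foo.txt --expires-in 604800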
There are two approaches I would recommend:
Go to the S3 console and manually download the objects you need, one by one; the bucket can stay private the whole time.
Build a gateway/small service to handle authentication for you: set a policy that gives the service container/Lambda permission to read the private bucket, and restrict downloads to specific users.

Use AWS keys to transfer data between organizations

I am trying to move client data from the client's S3 bucket (s3://client-bucket) to our organization's S3 bucket (s3://org-bucket). I was given access keys to the client's S3 bucket.
Using the AWS CLI I am able to access the client's S3 bucket and see all the files. I cannot, however, use aws s3 mv because the profile that has access to client-bucket does not have permissions set up for org-bucket.
I am not allowed to move the data through an intermediate public bucket because of security issues/sensitivity of the data.
What is the best way of making this transfer go through? Is there a way to set up a profile in the AWS CLI config/credentials with both the access keys for org-bucket and for client-bucket?
The best way is to use the access keys from your own organization to access your client's S3 bucket. Since the objects need to be copied directly with the CopyObject API, your IAM user/role needs access to both the S3 bucket in your org AND your client's S3 bucket. Therefore, your current approach with two separate sets of credentials will not work, and even AssumeRole would not help. The AWS documentation on cross-account access describes how to configure the appropriate resource-based (bucket) policies in S3.
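As a rough sketch (the account ID, user name and bucket names are placeholders), the client would attach a bucket policy like the one below to client-bucket, granting your org's IAM user read access; you can then copy using your org credentials alone:
# Policy the CLIENT applies to client-bucket, allowing your org's IAM user to list and read it
cat > client-bucket-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": { "AWS": "arn:aws:iam::111122223333:user/org-user" },
    "Action": ["s3:ListBucket", "s3:GetObject"],
    "Resource": ["arn:aws:s3:::client-bucket", "arn:aws:s3:::client-bucket/*"]
  }]
}
EOF
aws s3api put-bucket-policy --bucket client-bucket --policy file://client-bucket-policy.json
# Then, run from YOUR account with the profile that has access to org-bucket
aws s3 sync s3://client-bucket s3://org-bucket --profile org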

How should client download resource from AWS S3

I'm kinda new to AWS S3. I'm using EC2 (hosting a web app) and S3 (storing resources) in the same AWS region, and have assigned EC2 an IAM role s3access, so EC2 can download from S3 easily.
The question is: how should a client download from S3? Apparently the client doesn't have an IAM role or Access Key like EC2 does. It seems the client can only use a signedDownloadUrl generated by the aws-sdk, but generating that also requires an access key.
So, should I make the bucket public so any client can download, or should I find some approach to supply the client with credentials?
All objects by default are private. Only the object owner has permission to access these objects.
So if you want to share an object with someone, you can:
Make it public, or
As the object owner, share objects with others by creating a pre-signed URL, using your own security credentials, to grant time-limited permission to download the objects.
For more details on pre-signed URLs, refer to Share Objects with Pre-signed URLs in the Amazon S3 documentation.
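A hedged example of the second option: the web app on EC2 (which already has the s3access role, so no stored keys are needed) generates the URL and hands it to the client. Using the CLI for illustration (bucket and key are placeholders):
# Runs with the instance's IAM role credentials; the URL is valid for one hour here
aws s3 presign s3://bucketname/resources/image.png --expires-in 3600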

upload to s3 from ec2 without access key

Can you connect to S3 via s3cmd, or mount S3 on an EC2 instance, with IAM users and without using access keys?
All the tutorials I see say to use access keys, but what if you can't create your own access keys (due to IT policy)?
There are two ways to access data in Amazon S3: Via an API, or via URLs.
Via an API
When accessing Amazon S3 via API (which includes code using an AWS SDK and also the AWS Command-Line Interface (CLI)), user credentials must be provided in the form of an Access Key and a Secret Key.
The aws and s3cmd utilities, and also software that mounts Amazon S3 as a drive, require access to the API and therefore require credentials.
If you have been given a login to an AWS account, you should be able to ask your administrators to also create credentials that are associated with your User. These credentials will have exactly the same permissions as your normal login (via Username/password), so it's strange that they would be disallowing it. They can be very useful for automating AWS activities, such as starting/stopping Amazon EC2 instances.
Via URLs
Objects stored in Amazon S3 can also be made available via a URL that points directly to the data, eg https://s3.amazonaws.com/bucket-name/object.txt
To provide public access to these objects without requiring credentials, either add permission to each object or create a Bucket Policy that grants access to content within the bucket.
This access method can be used to retrieve individual objects, but is not sufficient to mount Amazon S3 as a drive.
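For example, once public Read has been granted on a bucket, an object can be retrieved with any HTTP client and no credentials at all (names are placeholders):
# Anonymous download of a public object
curl -O https://s3.amazonaws.com/bucket-name/object.txt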