Upload to S3 from EC2 without access keys

Can you connect to S3 via s3cmd, or mount S3 to an EC2 instance, with IAM users and not using access keys?
All the tutorials I see say to use access keys, but what if you can't create your own access keys (IT policy)?

There are two ways to access data in Amazon S3: via an API, or via URLs.
Via an API
When accessing Amazon S3 via API (which includes code using an AWS SDK and also the AWS Command-Line Interface (CLI)), user credentials must be provided in the form of an Access Key and a Secret Key.
The aws and s3cmd utilities, and also software that mounts Amazon S3 as a drive, require access to the API and therefore require credentials. (When the code runs on an Amazon EC2 instance, an IAM role attached to the instance can supply temporary credentials automatically, so no long-term access keys need to be stored on the instance.)
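For example, here is a minimal boto3 sketch (bucket name and key values are placeholders) showing where those credentials come into play:

import boto3

# Credentials are normally picked up automatically from ~/.aws/credentials,
# environment variables, or an EC2 instance role; they can also be passed
# explicitly, as shown here (placeholders, not real keys).
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIAxxxxxxxx',
    aws_secret_access_key='xxxxxxxx',
)

# List the objects in a bucket (bucket name is a placeholder)
response = s3.list_objects_v2(Bucket='my-bucket')
for obj in response.get('Contents', []):
    print(obj['Key'])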
If you have been given a login to an AWS account, you should be able to ask your administrators to also create credentials associated with your IAM User. These credentials have exactly the same permissions as your normal Username/password login, so it's strange that they would be disallowed. They can be very useful for automating AWS activities, such as starting/stopping Amazon EC2 instances.
Via URLs
Objects stored in Amazon S3 can also be made available via a URL that points directly to the data, eg s3.amazonaws.com/bucket-name/object.txt
To provide public access to these objects without requiring credentials, either add permission to each object or create a Bucket Policy that grants access to content within the bucket.
This access method can be used to retrieve individual objects, but is not sufficient to mount Amazon S3 as a drive.
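For illustration, here is a minimal boto3 sketch (the bucket name is a placeholder) of attaching such a Bucket Policy to make all objects publicly readable:

import json
import boto3

s3 = boto3.client('s3')

# Grant anonymous (public) read access to every object in the bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicRead",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-public-bucket/*"
    }]
}

s3.put_bucket_policy(Bucket='my-public-bucket', Policy=json.dumps(policy))

Note that buckets also have Block Public Access settings (enabled by default on newer buckets) that must be relaxed before a public Bucket Policy takes effect.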

Related

How to access objects in S3 bucket, without making the object's folder public

I have attached the AmazonS3FullAccess policy to both the IAM user and the group. Also, the bucket that I am trying to access says "Objects can be public". I have explicitly made the folder inside the bucket public. Despite all this, I get an access denied error when I try to access it through its URL. Any idea on this?
Objects in Amazon S3 are private by default. This means that objects are not accessible by anonymous users.
You have granted permission for your IAM User to be able to access S3. Therefore, you have access to the objects but you must identify yourself to S3 so that it can verify your identity.
You should be able to access S3 content:
Via the Amazon S3 management console
Using the AWS CLI (eg aws s3 ls s3://bucketname)
Via authenticated requests in a web browser
I suspect that you have been accessing your bucket via an unauthenticated request (eg bucketname.s3.amazonaws.com/foo.txt). Unfortunately, this does not tell Amazon S3 who you are, so it will deny the request.
To access content with this type of URL, you can generate an Amazon S3 pre-signed URL, which appends some authentication information to the URL to prove your identity. An easy way to generate one is with the AWS CLI:
aws s3 presign s3://bucketname/foo.txt
It will return a URL that looks like this:
https://bucketname.s3.amazonaws.com/foo.txt?AWSAccessKeyId=AKIAxxx&Signature=xxx&Expires=1608175109
The URL is valid for one hour by default; you can request a longer duration with the --expires-in argument (in seconds), up to a maximum of 7 days.
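The same can be done programmatically; here is a minimal boto3 sketch (bucket and key taken from the example above) that sets a custom expiry:

import boto3

s3 = boto3.client('s3')

# Generate a pre-signed GET URL valid for 24 hours (86400 seconds)
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'bucketname', 'Key': 'foo.txt'},
    ExpiresIn=86400,
)
print(url)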
There are two approaches I would recommend:
Go to the S3 console and manually download the objects you need, one by one; the bucket can stay private the whole time.
Build a gateway or small service to handle authentication for you: set a policy that gives the service (container/Lambda) permission to access the private bucket, and allow only specific users to download the objects (a rough sketch follows below).
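As a rough sketch of the second approach (all names are hypothetical, and is_authorized() stands in for whatever authentication you use), the service could hand out short-lived pre-signed URLs instead of exposing the bucket:

import boto3

s3 = boto3.client('s3')

BUCKET = 'private-bucket'  # hypothetical bucket name

def handle_download(user, key):
    """Return a short-lived download URL if the user may read the key."""
    # is_authorized() is a stand-in for your own authentication/
    # authorization check (eg token validation or a database lookup)
    if not is_authorized(user, key):
        raise PermissionError('user may not access this object')

    # The service's own role signs the URL; the bucket stays private
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=300,  # five minutes
    )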
References
download from aws s3
aws policy, permission and roles

Use AWS keys to transfer data between organizations

I am trying to move client data from the client's S3 bucket (s3://client-bucket) to our organization's S3 bucket (s3://org-bucket). I was given access keys to the client's S3 bucket.
Using the AWS CLI I am able to access the client's S3 bucket and see all the files. I cannot, however, use aws s3 mv because the profile that has access to client-bucket does not have permissions set up for org-bucket.
I am not allowed to move data via an intermediate public bucket because of security issues/the sensitivity of the data.
What is the best way of making this transfer go through? Is there a way to set up a profile in the AWS CLI config/credentials with both the access keys for org-bucket and client-bucket?
The best way is to use your own organization's credentials to access your client's S3 bucket. Since you need to copy objects directly via the CopyObject API, your IAM user/role needs to have access to both the S3 bucket in your org AND your client's S3 bucket. Therefore, your current two-profile approach doesn't work, and AssumeRole would not help either. You can follow this guide to configure the proper resource-based (bucket) policies in S3, as sketched below.
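As an illustrative sketch (the account ID, user name, and object key are placeholders; the bucket names are from the question), the client would add a statement like the one in the comment below to client-bucket's bucket policy, after which a single set of credentials can copy server-side:

import boto3

# Statement the client adds to client-bucket's bucket policy so your
# org's IAM user can read it (account ID and user name are placeholders):
# {
#   "Effect": "Allow",
#   "Principal": {"AWS": "arn:aws:iam::111122223333:user/org-user"},
#   "Action": ["s3:GetObject", "s3:ListBucket"],
#   "Resource": ["arn:aws:s3:::client-bucket",
#                "arn:aws:s3:::client-bucket/*"]
# }

s3 = boto3.client('s3')  # your org's credentials

# Server-side copy: the data moves directly between the buckets
s3.copy_object(
    Bucket='org-bucket',
    Key='data/file.csv',  # placeholder key
    CopySource={'Bucket': 'client-bucket', 'Key': 'data/file.csv'},
)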

Overcome 1000 bucket limit in S3 / use access points

I have one S3 bucket per customer. Customers are external entities and they don't share data with anyone else. I write to S3 and the customer reads from S3. With this architecture I can only scale to 1,000 buckets, as there is a limit on S3 buckets per account. I was hoping to use access points (APs) to create one AP per customer and put all the data in one bucket. Each customer could then read their files from the bucket using their AP.
Bucket000001/prefix01 -> customeraccount1
Bucket000001/prefix02 -> customeraccount2
...
S3 access points require you to set a policy for an IAM user on the access point as well as at the bucket level. If I have thousands of IAM users, do I need to set a policy for each of them in the bucket? That would result in one giant policy, and there is a maximum policy size for the bucket, so I may not be able to do that.
Is this the right use case where access points can help?
The recommended approach would be:
Do NOT assign IAM Users to your customers. These types of AWS credentials should only be used by your internal staff and your own applications.
You should provide a web application (or an API) where customers can authenticate against your own user database (or you could use Amazon Cognito to manage authentication).
Once authenticated, the application should grant access either to a web interface to access Amazon S3, or the application should provide temporary credentials for accessing Amazon S3 (more details below).
Do not use one bucket per customer. This is not scalable. Instead, store all customer data in ONE bucket, with each user having their own folder. There is no limit on the amount of data you can store in Amazon S3. This also makes it easier for you to manage and maintain, since it is easier to perform functions across all content rather than having to go into separate buckets. (An exception might be if you wish to segment buckets by customer location (region) or customer type. But do not use one bucket per customer. There is no reason to do this.)
When granting access to Amazon S3, assign permissions at the folder-level to ensure customers only see their own data.
Option 1: Access via Web Application
If your customers access Amazon S3 via a web application, then you can code that application to enforce security at the folder level. For example, when they request a list of files, only display files within their folder.
This security can be managed totally within your own code.
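For example (the bucket name is a placeholder), restricting a listing to the signed-in customer's folder is just a matter of fixing the Prefix:

import boto3

s3 = boto3.client('s3')

def list_customer_files(customer_id):
    """Return only the keys inside this customer's own folder."""
    response = s3.list_objects_v2(
        Bucket='storage-bucket',   # placeholder bucket name
        Prefix=f'{customer_id}/',  # confines results to their folder
    )
    return [obj['Key'] for obj in response.get('Contents', [])]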
Option 2: Access via Temporary Credentials
If your customers use programmatic access (eg using the AWS CLI or a custom app running on their systems), then:
The customer should authenticate to your application (how this is done will vary depending upon how you are authenticating users)
Once authenticated, the application should generate temporary credentials using the AWS Security Token Service (STS). While generating the credentials, grant access to Amazon S3 but specify the customer's folder in the ARN (eg arn:aws:s3:::storage-bucket/customer1/*) so that they can only access content within their folder (see the sketch after this list).
Return these temporary credentials to the customer. They can then use these credentials to make API calls directly to Amazon S3 (eg from the AWS Command-Line Interface (CLI) or a custom app). They will be limited to their own folder.
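A minimal sketch of the credential-generation step with boto3 (the bucket name and inline policy are illustrative; get_federation_token is one way to mint such scoped credentials):

import json
import boto3

sts = boto3.client('sts')

def credentials_for(customer_id):
    """Mint temporary credentials limited to one customer's folder."""
    # Inline policy restricting access to the customer's folder in the
    # (placeholder) storage-bucket
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::storage-bucket/{customer_id}/*"
        }]
    }
    token = sts.get_federation_token(
        Name=customer_id,
        Policy=json.dumps(policy),
        DurationSeconds=3600,  # one hour
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration
    return token['Credentials']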
This approach is commonly used with mobile applications. The mobile app authenticates against the backend, receives temporary credentials, then uses those credentials to interact directly with S3. Thus, the back-end app is only used for authentication.
Examples on YouTube:
5 Minutes to Amazon Cognito: Federated Identity and Mobile App Demo
Overview Security Token Service STS
AWS: Use the Session Token Service to Securely Upload Files to S3
There are a couple of ways to achieve your goal.
Use an IAM group to grant access to a folder: create a group, add users to it, and attach a policy to the group that grants access to the folder.
Another way is to use a policy with the ${aws:username} variable to grant each user access to their own folder (a sketch follows below). Refer to this link: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
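A sketch of that pattern (the group, policy, and bucket names are placeholders), attaching a ${aws:username}-based policy to a group with boto3:

import json
import boto3

iam = boto3.client('iam')

# Each user can list and read/write only their own folder; the
# ${aws:username} variable is resolved by IAM at request time
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::storage-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::storage-bucket/${aws:username}/*"
        }
    ]
}

iam.put_group_policy(
    GroupName='customers',
    PolicyName='per-user-folder-access',
    PolicyDocument=json.dumps(policy),
)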

How to access a public S3 bucket from another AWS account?

In one of his blog posts, the author mentioned that he uploaded a dataset into an S3 bucket and gave it public access.
s3://us-east-1.elasticmapreduce.samples/flightdata/input
Now I want to download/see the data from my chrome browser.
When I copy-paste the above link into Chrome's address bar, it asks for:
Access key ID
Secret access key
What should I give here?
Did the author initially make it public and later make it private?
(I am confused)
Also, can we access these kinds of URLs that start with s3:// directly from a browser?
Do I need an AWS account to access these S3 buckets?
(I know we can access web data using http protocol.. http://)
The Amazon S3 management console allows you to view buckets belonging to your account. It is not possible to view S3 buckets belonging to other accounts within the S3 console.
You can, however, access them via the AWS Command-Line Interface (CLI). For example:
aws s3 ls s3://us-east-1.elasticmapreduce.samples/flightdata/input/
You can also copy files from other buckets by using aws s3 cp and aws s3 sync.
These calls require a set of valid AWS credentials (Access Key and Secret Key), which can be stored in the credentials file via the aws configure command. You do not need specific permission to access public buckets, but you do need permission to use S3 in general. You can obtain an Access Key and Secret Key in the IAM management console where your IAM User is defined. (Or, if you do not have permission to view it, ask your AWS administrator for the Access Key and Secret Key.)
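For example, here is the equivalent of the CLI listing above as a boto3 sketch (your configured credentials are picked up automatically):

import boto3

s3 = boto3.client('s3')  # uses credentials from aws configure

# List the sample flight data in the public bucket
response = s3.list_objects_v2(
    Bucket='us-east-1.elasticmapreduce.samples',
    Prefix='flightdata/input/',
)
for obj in response.get('Contents', []):
    print(obj['Key'], obj['Size'])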

How to put an app on the AWS Marketplace that requires S3 resources

I have an application on an EC2 instance that I wish to put on the AWS Marketplace. The application uses Amazon S3 and on startup requires users to enter an Access Key, Secret Key, and a BucketName. It then uses the access key and secret key to create a bucket (specified by BucketName). However, this isn't allowed on the AWS Marketplace:
However, for AWS Marketplace, we require application authors to use AWS Identity and Access Management (IAM) roles and do not permit the use of access or secret keys.
Question
I am confused as to how to get around this and still put my AMI on the AWS Marketplace. My goal is for users to create their own S3 buckets in their own AWS Environments.
Your customers can create AWS IAM roles with access to the required resources (S3 buckets), and allow your account to use those roles.
The reasoning behind this mechanism is that your customers can follow the principle of least privilege and limit access to very specific resources and actions on those resources (instead of providing unsecured/root access to their entire account). A sketch of how your application would use such a role follows below.
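As an illustrative sketch (the role ARN, external ID, and bucket name are placeholders the customer would supply at setup time), your application would assume the customer-provided role and create the bucket with the resulting temporary credentials:

import boto3

sts = boto3.client('sts')

# Assume the role the customer created in *their* account
assumed = sts.assume_role(
    RoleArn='arn:aws:iam::111122223333:role/marketplace-app-role',
    RoleSessionName='marketplace-app',
    ExternalId='customer-chosen-id',  # guards against confused-deputy misuse
)
creds = assumed['Credentials']

# Build an S3 client from the temporary credentials and create the
# bucket in the customer's account (outside us-east-1 you would also
# pass a CreateBucketConfiguration with a LocationConstraint)
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken'],
)
s3.create_bucket(Bucket='customer-chosen-bucket-name')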