Authenticate AmazonS3 client without Credentials - amazon-web-services

I am trying to upload to an S3 bucket using the AmazonS3 client. I create it using the following code:
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new ProfileCredentialsProvider())
        .build();
This uses the .aws/credentials file to authenticate. My problem is that when I deploy this to an EC2 environment (rather than running locally), it fails because that environment doesn't have a .aws/credentials file on it, and we are not allowed to add the credentials for security reasons.
How can I get around this?

You should use an IAM role to authenticate to AWS services. Create an IAM role with the necessary S3 permissions and attach it to the EC2 instance. Whenever you make an S3 request, it will authenticate through the role, so you don't need a credentials file in the application at all.

You need to create an instance profile. The EC2 instance will then be able to access whatever resources are in your account, as long as the instance profile's role has those permissions. See: Instance profile creation
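With the role attached, the client from the original question can be built without naming a credentials provider at all. A minimal sketch, assuming the instance profile is in place (clientRegion as in the question; InstanceProfileCredentialsProvider is in com.amazonaws.auth):
// With no provider specified, the default credentials chain is used,
// which falls back to the EC2 instance metadata service.
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .build();

// Or state the source of credentials explicitly
// (false = don't refresh credentials asynchronously):
AmazonS3 s3ClientExplicit = AmazonS3ClientBuilder.standard()
        .withRegion(clientRegion)
        .withCredentials(new InstanceProfileCredentialsProvider(false))
        .build();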

Related

Get access from account1's EC2 to account2's S3 object using AWS SDK

I have a running app in an auto-scaled EC2 environment in account1, created via AWS CDK (it should also support running in multiple regions). During execution the app needs to get an object from account2's S3.
One way to get the S3 data is to use temporary credentials (via STS assume-role):
on account1's side, create a policy allowing the EC2 instance role to assume a role in account2 for the S3 object
on account2's side, create a policy granting GetObject access to the S3 object
on account2's side, create a role, attach the policy from point 2 to it, and add a trust relationship to account1's EC2 role
Pros: no user credentials are required to get access to the data
Cons: each environment update requires manual permission reconfiguration
Another way is to create a user in account2 with permission to get the S3 object and store that user's credentials on the account1 side.
Pros: environment updates don't require manual permission reconfiguration
Cons: exposes the IAM user's credentials
Is there a better option that eliminates both the manual permission configuration and the explicit sharing of IAM user credentials?
You can add a Bucket Policy on the Amazon S3 bucket in Account 2 that permits access by the IAM Role used by the Amazon EC2 instance in Account 1.
That way, the EC2 instance(s) can access the bucket just as if it were in the same account, without having to assume any roles or use any extra credentials.
Simply set the Principal to be the ARN of the IAM Role used by the EC2 instances.
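A sketch of such a bucket policy, applied by the bucket owner in Account 2 (the account ID, role name, and bucket name here are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/account1-ec2-role" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::account2-bucket/*"
    }
  ]
}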

How does the AWS SDK know the credentials without specifying them?

I am curious about how the AWS SDK can access services such as S3 locally without explicitly being provided credentials. For example, this Python code is given only a bucket name and key name, yet can still fetch the file from S3 on my local machine:
import boto3

def s3():
    bucket = "my-bucket"
    file_name = "folder1/sample.json"

    # No credentials are passed in; boto3 locates them on its own
    s3 = boto3.client('s3')
    obj = s3.get_object(Bucket=bucket, Key=file_name)
    file_content = obj["Body"].read().decode('utf-8')
Where did the AWS SDK get the credentials? Does it use the profile configured with the aws configure CLI command? And if you provide an explicit access key and secret key, what is the order of priority?
All of the AWS SDKs follow a similar pattern. For boto3, the lookup order is documented here, but for completeness it is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
It depends on how your environment is configured but it sounds like you have a ~/.aws/credentials file.
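Since all the SDKs resolve credentials in much the same way, the lookup can also be made visible with the Java SDK used elsewhere on this page. A small sketch (it assumes at least one provider in the chain can supply credentials; otherwise getCredentials() throws):
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

// Resolve credentials through the default chain (environment variables,
// system properties, shared credentials file, instance metadata, ...)
// and show which access key was picked up.
AWSCredentials creds = DefaultAWSCredentialsProviderChain.getInstance().getCredentials();
System.out.println("Resolved access key: " + creds.getAWSAccessKeyId());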

Put Object to S3 Bucket of another account

We are able to put objects into our S3 Bucket.
But now we have a requirement to put these objects directly into an S3 bucket that belongs to a different account and a different region.
Here we have a few questions:
Is this possible?
If possible, what changes do we need to make?
They have provided us with the access key, secret key, region, and bucket details.
Any comments and suggestions will be appreciated.
IAM credentials are associated with a single AWS Account.
When you launch your own Amazon EC2 instance with an assigned IAM Role, it will receive access credentials that are associated with your account.
To write to another account's Amazon S3 bucket, you have two options:
Option 1: Your credentials + Bucket Policy
The owner of the destination Amazon S3 bucket can add a Bucket Policy on the bucket that permits access by your IAM Role. This way, you can just use the normal credentials available on the EC2 instance.
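A sketch of what the destination bucket's policy might look like, where the Principal is the IAM Role of your EC2 instance (account ID, role name, and bucket name are placeholders):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:role/your-ec2-role" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::their-bucket/*"
    }
  ]
}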
Option 2: Their credentials
It appears that you have been given access credentials for their account. You can use these credentials to access their Amazon S3 bucket.
As detailed on Working with AWS Credentials - AWS SDK for Java, you can provide these credentials in several ways. However, if you are using BOTH the credentials provided by the IAM Role AND the credentials that have been given to you, it can be difficult to 'switch between' them. (I'm not sure if there is a way to tell the Credentials Provider to switch between a profile stored in the ~/.aws/credentials file and those provided via instance metadata.)
Thus, the easiest way is to specify the Access Key and Secret Key when creating the S3 client:
BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_access_key");
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
        .build();
It is generally not a good idea to put credentials in your code. You should load them from a configuration file.
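For example, their keys could live in a named profile in ~/.aws/credentials while the rest of the application keeps using the instance role by default. A sketch, with 'account2' as a hypothetical profile name:
// ~/.aws/credentials (sketch):
// [account2]
// aws_access_key_id = ...
// aws_secret_access_key = ...

// ProfileCredentialsProvider is in com.amazonaws.auth.profile
AmazonS3 crossAccountClient = AmazonS3ClientBuilder.standard()
        .withCredentials(new ProfileCredentialsProvider("account2"))
        .build();
This also sidesteps the 'switching' problem: the default chain serves your account, and the named profile serves theirs.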
Yes, it's possible. You need to allow the cross-account S3 put operation in the bucket's policy.
Here is a blog post by AWS that should help you set up the cross-account put action.

EC2 instance to assume an IAM role so that I don't have to enter a token every time I use services on AWS

On my client's AWS account, security credentials are generated every time we log in to their AWS sandbox account. The credentials file is automatically generated and downloaded via a Chrome plugin (SAML to AWS STS Key Conversion).
We then have to place the generated content into the ~/.aws/credentials file inside an EC2 instance in the same AWS account. This is a little inconvenient, as we have to update the generated credentials and session_token in the credentials file inside the EC2 instance every time we run a Terraform script.
Is there any way we can attach a role so that we can just use the EC2 instance without entering the credentials into the credentials file?
Please suggest.
Work out a reasonable, minimal set of permissions that the Terraform script needs in order to create its AWS resources, create an IAM role with those permissions, then add that role to the instance (or launch a new instance with the role). Don't keep a ~/.aws/credentials file on the instance, or it will take precedence over the IAM role-based credentials.
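One possible sequence with the AWS CLI, sketched with placeholder names, policy ARN, and instance ID (ec2-trust-policy.json stands for a hypothetical trust policy allowing ec2.amazonaws.com to assume the role):
# Create the role and grant it the minimal permissions Terraform needs
aws iam create-role --role-name terraform-runner \
    --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name terraform-runner \
    --policy-arn arn:aws:iam::123456789012:policy/terraform-minimal

# Wrap the role in an instance profile and attach it to the instance
aws iam create-instance-profile --instance-profile-name terraform-runner
aws iam add-role-to-instance-profile --instance-profile-name terraform-runner \
    --role-name terraform-runner
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=terraform-runner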

boto3 s3 role arn

I can't use boto3 to connect to S3 with a role arn provided 100% programmatically.
session = boto3.Session(role_arn="arn:aws:iam::****:role/*****",
                        RoleSessionName="****")
s3_client = boto3.client('s3',
                         aws_access_key_id="****",
                         aws_secret_access_key="****")
for b in s3_client.list_buckets()["Buckets"]:
    print(b["Name"])
I can't provide the ARN info to both Session and client, and there is no assume_role() on an S3 client.
I found a way with an STS temporary token, but I don't like it:
sess = boto3.Session(aws_access_key_id="*****",
                     aws_secret_access_key="*****")
sts_connection = sess.client('sts')
assume_role_object = sts_connection.assume_role(RoleArn="arn:aws:iam::***:role/******",
                                                RoleSessionName="**",
                                                DurationSeconds=3600)
session = boto3.Session(
    aws_access_key_id=assume_role_object['Credentials']['AccessKeyId'],
    aws_secret_access_key=assume_role_object['Credentials']['SecretAccessKey'],
    aws_session_token=assume_role_object['Credentials']['SessionToken'])
s3_client = session.client('s3')
for b in s3_client.list_buckets()["Buckets"]:
    print(b["Name"])
Do you have any ideas?
You need to understand how temporary credentials are created.
First, you create a client using your current access keys. Those credentials are then used to verify that you have permission to call assume_role and the right to obtain credentials from the IAM role.
If it worked the way you propose, assume_role would be a HUGE security hole: your rights must be validated first, and only then can temporary credentials be issued.
Firstly, never put an access key and secret key in your code. Always store credentials in a ~/.aws/credentials file (e.g. via aws configure). This avoids embarrassing situations where your credentials are accidentally released to the world. Also, if you are running on an Amazon EC2 instance, simply assign an IAM Role to the instance and it will automatically obtain credentials.
An easy way to assume a role in boto3 is to store the role details in the credentials file with a separate profile. You can then reference the profile when creating a client and boto3 will automatically call assume-role on your behalf.
See: boto3: Assume Role Provider
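Concretely, the documented mechanism is a profile in ~/.aws/config; a sketch, where the role ARN and profile names are placeholders:
# ~/.aws/config
[profile s3-assumed]
role_arn = arn:aws:iam::123456789012:role/my-role
source_profile = default
Creating a session with profile_name="s3-assumed" then makes boto3 call AssumeRole and refresh the temporary credentials on your behalf.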