So I think that the simplest solution to my problem is to use AWS for everything, but I wanted to understand what is possible:
I understand that IAM roles can be associated with an AWS service such as EC2 or Lambda so that an application/function running within that service can retrieve credentials to sign API requests to other AWS services.
I have an existing application running on Heroku and using Amazon S3. Currently I have an IAM user set up for this application, which signs requests to the AWS API using the access keys associated with the IAM user account. I understand that best practice is to use an IAM role rather than an IAM user for AWS API calls made from application code. Is it possible to set this up for an application hosted outside of AWS, or would I need to migrate the application to AWS EC2 in order to use IAM roles?
It doesn't matter where the application is hosted but to assume an IAM role you will need IAM credentials (chicken and egg). Typically you would design a secure way for your app to retrieve these base credentials. This is one disadvantage of running your compute outside of AWS (because it can't automatically assume an IAM role).
One option would be to create an IAM user whose only permission is to assume a given IAM role. Securely supply those IAM user credentials to your application outside of AWS and have the application assume the IAM role, ideally with an ExternalId that is itself securely stored and securely retrieved by your application. Additionally, you can manage access to the IAM role, for example defining which principals can assume it and under which conditions.
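A minimal sketch of that flow with boto3, assuming an assume-only IAM user whose keys arrive via environment/config vars; the role ARN, session name, and ExternalId below are illustrative, not from this thread:

import boto3

# Base credentials for the assume-only IAM user are picked up from the
# environment (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY), never hard-coded.
sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/app-role",  # illustrative ARN
    RoleSessionName="heroku-app",
    ExternalId="example-external-id",  # retrieved securely at runtime
)
credentials = response["Credentials"]

# Sign S3 requests with the role's temporary credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)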
AWS announced a new feature, AWS IAM Roles Anywhere, that should help if you need to avoid using access/secret keys. It's more complicated but follows security best practices.
AWS Identity and Access Management (IAM) now enables workloads that
run outside of AWS to access AWS resources using IAM Roles Anywhere.
IAM Roles Anywhere allows your workloads such as servers, containers,
and applications to use X.509 digital certificates to obtain temporary
AWS credentials and use the same IAM roles and policies that you have
configured for your AWS workloads to access AWS resources.
and more here:
create a trust anchor where you either reference your AWS
Certificate Manager Private Certificate Authority (ACM Private CA) or
register your own certificate authorities (CAs) with IAM Roles
Anywhere. By adding one or more roles to a profile and enabling IAM
Roles Anywhere to assume these roles, your applications can now use
the client certificate issued by your CAs to make secure requests to
AWS and get temporary credentials to access the AWS environment.
AWS Announcement: https://aws.amazon.com/about-aws/whats-new/2022/07/aws-identity-access-management-iam-roles-anywhere-workloads-outside-aws/
User Guide:
https://docs.aws.amazon.com/rolesanywhere/latest/userguide/introduction.html
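Once a trust anchor, profile, and role are configured, the usual wiring is a credential_process entry that calls the Roles Anywhere credential helper. A hedged sketch of an ~/.aws/config entry, assuming the aws_signing_helper binary described in the user guide; every ARN and path below is a placeholder:

[profile rolesanywhere]
credential_process = aws_signing_helper credential-process --certificate /path/to/client-cert.pem --private-key /path/to/client-key.pem --trust-anchor-arn <trust-anchor-arn> --profile-arn <profile-arn> --role-arn <role-arn>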
From the Heroku docs:
Because of the sensitive nature of your S3 credentials, you should never commit them to version control. Instead, set them as the values of config vars for the Heroku apps that will use them.
Use the heroku config:set command to set both keys:
heroku config:set AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=yyy
Adding config vars and restarting app... done, v21
AWS_ACCESS_KEY_ID => xxx
AWS_SECRET_ACCESS_KEY => yyy
The above is in line with AWS's own best practices for managing AWS access keys, specifically not embedding access keys directly in code.
Outside of AWS, you can't use IAM roles in the sense of credentials being picked up automatically; you have to supply base credentials explicitly.
Your next best option is environment variables (as detailed above), specifying the access key ID and secret access key for a user with a policy granting the least privilege required for the files it needs to read from S3, e.g. a specific bucket name, specific objects, even specific source IP addresses where possible.
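With the config vars set as above, boto3 discovers the keys from the environment automatically, so nothing is embedded in code; a minimal sketch (bucket and object names are illustrative):

import boto3

# Credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the
# environment; no keys appear in source code.
s3 = boto3.client("s3")
s3.download_file("my-bucket", "reports/input.csv", "/tmp/input.csv")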
Related
I am trying to follow best practices, but the documentation is not clear to me. I have a python script running locally that will move some files from my local drive to S3 for processing. Lambda picks it up from there and does the rest. So far I set up an AWS User for this process, and connected it to a "policy" that only has access to the needed resources.
Next step is to move my scripts to a docker container in my local server. But I thought best practice would be to use a Role with policies, instead of a User with policies. However, according to this documentation... in order to AssumeRole... I have to first be signed in as a user.
The calls to AWS STS AssumeRole must be signed with the access key ID
and secret access key of an existing IAM user or by using existing temporary
credentials such as those from another role. (You cannot call AssumeRole
with the access key for the root account.) The credentials can be in
environment variables or in a configuration file and will be discovered
automatically by the boto3.client() function.
So no matter what, I'll need to embed my user credentials into my Docker image (or at least a separate secrets file).
If that is the case, then it seems adding a "Role" in the middle between the User and the Policies seems completely useless and redundant. Can anyone confirm or correct?
Roles and policies are for services running in AWS environments. For a role, you define a trust policy; the trust policy defines which principal (user, role, AWS service, etc.) can assume it. You also define the permissions that whatever assumes the role has to access AWS services.
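For illustration, a minimal sketch of creating such a role with boto3, where the trust policy allows one specific IAM user to assume it; the account ID and names are made up:

import json
import boto3

# Trust policy: only this IAM user may assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/batch-uploader"},
        "Action": "sts:AssumeRole",
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="s3-upload-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)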
For services running inside AWS (EC2, Lambda, ECS), it is always possible to select an IAM role, which will be assumed by your service. This way your application will always get temporary credentials corresponding to the IAM role and you should never use an AWS Access Key Id and Secret.
However, this isn't possible for services running locally or outside of AWS environment. For your Docker container running locally, the only real option would be to create an Access Key ID and Secret and copy it there. There are still some things you can do to keep your account secure:
Follow the least-privilege principle: create a policy that provides access to only the absolutely required resources (see the sketch after this list).
Create a user (programmatic access only) and add the policy. Use AWS Access Key ID and Secret of this user for your Docker container.
Make sure that the AWS Credentials are rotated regularly.
Make sure that the secrets aren't committed in source control; prefer a secrets file or a vault system over environment variables.
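As a sketch of the least-privilege point above, an inline policy for the container's IAM user that allows only uploads under one prefix of one bucket; the bucket, user, and policy names are illustrative:

import json
import boto3

# Permissions policy scoped to a single bucket prefix.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::my-processing-bucket/incoming/*",
    }],
}

iam = boto3.client("iam")
iam.put_user_policy(
    UserName="docker-uploader",
    PolicyName="s3-incoming-only",
    PolicyDocument=json.dumps(policy),
)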
We can leverage AWS services from within AWS infrastructure using the ACCESS_ID/ACCESS_SECRET or by assigning an IAM role.
What if I want to access the services from an instance outside of AWS, e.g. DigitalOcean? I know that using the access key is not a good option. What is the recommended practice as an alternative to assigning roles to EC2 instances?
API calls to AWS go to public endpoints on the Internet. Therefore, they are accessible from anywhere on the Internet, not just within AWS.
Therefore, you should use the same method for connecting to AWS both inside AWS and outside AWS.
Using the Access Key and Secret Key as credentials is the correct method.
To assume an IAM Role, you must have an initial set of AWS credentials, so that AWS can confirm that you are entitled to assume the role. For example, an IAM User can provide their credentials to assume an IAM Role.
You can also assign an IAM Role to an Amazon EC2 instance. In this situation, the AWS service will automatically assume the role on behalf of the instance, and will provide the resulting credentials through the EC2 instance metadata service.
If you are using your own computer (not an Amazon EC2 instance), it is not possible to assign an IAM Role. Instead, use an Access Key + Secret Key. They should be stored in your ~/.aws/credentials file via the AWS CLI aws configure command. Never put actual credentials in your code files, since this can be a security risk (eg having credentials stored in GitHub).
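For reference, aws configure writes a credentials file of this shape (the values here are placeholders):

[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = exampleSecretAccessKey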
AWS announced a new feature, AWS IAM Roles Anywhere, that should help if you need to avoid using access/secret keys; see the announcement and user guide quoted and linked earlier in this thread.
I have a Django project (deployed on AWS Elastic Beanstalk) that uses several AWS services (SES, S3, etc.) through an IAM user. I am wondering what the best way to store this IAM user's credentials in the Django project is.
I have thought of a few approaches and have a few questions for each:
Make a .env file with the credentials. Can this be hacked though? Is this the most secure way?
Use Amazon Secrets Manager to create a secret with the credentials. I tried this, but then realized that you need to supply credentials to use it (😅).
Is there a better method? What would you recommend?
Don't use IAM user credentials on AWS compute services (EC2, Elastic Beanstalk, Lambda, etc.)
Instead, use an IAM role. See Managing Elastic Beanstalk instance profiles.
Per What is the difference between an IAM role and an IAM user?
An IAM user has permanent long-term credentials and is used to
directly interact with AWS services. IAM roles
are meant to be assumed by authorized entities, such as
applications ...
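With an instance profile attached to the Elastic Beanstalk environment's instances, the SDK resolves the role's temporary credentials automatically through the instance metadata service, so application code carries no keys at all; a minimal sketch, assuming boto3 (the region is illustrative):

import boto3

# No keys anywhere: boto3 falls back to the instance profile credentials.
ses = boto3.client("ses", region_name="us-east-1")
s3 = boto3.client("s3")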
I am trying to set up Spinnaker locally to manage AWS EC2 instances. The current documentation describes steps that require the Spinnaker instance to be running on EC2: they create one role and attach it to the Spinnaker instance. As I am running Spinnaker in my local environment, I am looking for a way to allow my local Spinnaker instance to access AWS resources. Would it be possible to have one such policy/role? Maybe using AWS STS (Security Token Service), but I don't know how to use those creds with the Spinnaker instance.
You can do this directly by creating an IAM user with the required policies to access AWS resources, and use the programmatic access credentials on your local machine to use the AWS CLI, API, or SDKs.
For an existing IAM user, the steps are as follows.
IAM User -> Security Credentials -> Create Access Keys
Note: If you cannot trust your local environment, then you can use the AWS STS service (for this you need to implement a separate service, where you can pass user credentials and request a temporary token from AWS STS).
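A minimal sketch of that STS exchange with boto3, trading the IAM user's long-lived keys for short-lived session credentials; the duration is illustrative:

import boto3

# Signed with the IAM user's keys from the environment or ~/.aws/credentials.
sts = boto3.client("sts")
response = sts.get_session_token(DurationSeconds=3600)
credentials = response["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration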
You can create an IAM role for your local machine to assume, like this example, or stricter.
Spinnaker will handle the STS assume-role, given it's configured properly.
As for the temporary credential, if what you mean is MFA compatibility, I am myself still figuring out the way to do it. I think one workaround is to create a wrapper script that calls sts:AssumeRole, asks the user to provide the MFA token, then sets AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN, which will be honored by Clouddriver (sketched below); but then deployment to multiple AWS accounts will be a problem.
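A hedged sketch of that wrapper idea; the ARNs are made up, and the wrapper would launch Clouddriver as a child process so it inherits the exported variables:

import os
import boto3

token_code = input("MFA token: ")
sts = boto3.client("sts")
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/spinnaker-managed",  # illustrative
    RoleSessionName="spinnaker-local",
    SerialNumber="arn:aws:iam::123456789012:mfa/my-user",  # illustrative MFA device
    TokenCode=token_code,
)
credentials = response["Credentials"]

# Exported for child processes (e.g. Clouddriver launched by this wrapper).
os.environ["AWS_ACCESS_KEY_ID"] = credentials["AccessKeyId"]
os.environ["AWS_SECRET_ACCESS_KEY"] = credentials["SecretAccessKey"]
os.environ["AWS_SESSION_TOKEN"] = credentials["SessionToken"]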
I have an application on an EC2 instance that I wish to put on the AWS Marketplace. The application uses Amazon S3 and on startup requires users to enter an Access Key, Secret Key, and a BucketName. It then uses the Access Key and Secret Key to create a bucket (specified by BucketName). However, this isn't allowed on the AWS Marketplace:
However, for AWS Marketplace, we require application authors to use AWS
Identity and Access Management (IAM) roles and do not permit the use
of access or secret keys.
Question
I am confused as to how to get around this and still put my AMI on the AWS Marketplace. My goal is for users to create their own S3 buckets in their own AWS Environments.
Your customers can create AWS IAM roles with access to the required resources (S3 buckets), and allow your account to use those roles.
The reasoning behind this mechanism is that your customers can follow the principle of least privilege and limit access to very specific resources and actions on those resources (instead of providing unsecured / root access to their entire account).
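In practice for an AMI this means the customer attaches the role they created as an instance profile when launching your product; the application then needs only the bucket name. A minimal sketch, assuming boto3 (the bucket name is illustrative):

import boto3

bucket_name = "customer-chosen-bucket"  # entered by the customer at startup

# No access/secret keys: boto3 uses the instance role's credentials.
s3 = boto3.client("s3")
s3.create_bucket(Bucket=bucket_name)  # outside us-east-1, also pass CreateBucketConfiguration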