Automatic login to AWS resources

So far I have been accessing my AWS resources using an Access Key ID and Secret Access Key, but every time I end my session I have to re-enter these keys by running aws configure.
Is there an automated way, perhaps using an SSH private key on the local host?

Generally speaking, when you run aws configure and enter your credentials, they are saved in the .aws/credentials file on your machine (the exact path depends on the OS). You shouldn't have to run aws configure again unless your credentials change.
Once that is done - one time - every subsequent AWS CLI command will use those stored credentials; you should never have to enter them more than once.
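
As a quick sanity check, a minimal boto3 sketch (boto3 reads the same credentials file the CLI writes, so nothing is hard-coded here):

import boto3

# Ask boto3 which credentials it resolved. After a one-time `aws configure`,
# this reports the shared credentials file, confirming the stored keys are
# picked up without re-entering them each session.
creds = boto3.Session().get_credentials()
print(creds.method)  # e.g. "shared-credentials-file"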

AWS also provides a way to handle access and secret keys for multiple environments, in the form of named profiles.
Say you have multiple accounts or multiple regions: you can set each one up as a profile with aws configure --profile <profilename>.
Then, when performing operations in one particular account and region, select that profile:
export AWS_PROFILE=<profilename>
This makes it easy to work with multiple environments.
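
The SDK equivalent, as a minimal boto3 sketch (the profile name "prod" is hypothetical):

import boto3

# Each Session can bind to a named profile from the shared credentials file,
# so one script can work with several accounts or regions side by side.
prod = boto3.Session(profile_name="prod")
print([b["Name"] for b in prod.client("s3").list_buckets()["Buckets"]])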

Related

How to give an untrusted VM partial or temporary access to my AWS privileges?

I have an AWS account with full access to DynamoDB.
I am writing an application that uses DynamoDB. I would like to test this application against the real DynamoDB (not a local compatible/mock solution). However, the test application is not as secure as a real production-ready application, and there is a real risk that during my tests an attacker may break into the test machine. If my real AWS credentials (needed to write to DynamoDB) are on that machine, they may be stolen, and the attacker can then do basically anything that I can do in my account - e.g., create expensive VMs and mine for bitcoin.
So I'm looking for an alternative to saving my real AWS credentials (access key id and secret access key) on the test machine.
I read about Amazon's Signature Version 4 algorithm, and it turns out that its signature process is actually two-staged: first a "signing key" is derived from the full credentials, and this signing key is valid only for a single day and a single service; then the signing key is used to sign the individual requests. This suggests that I could calculate the signing key on a secure machine and send it to the test machine - the test machine would only perform the second stage of the algorithm, and would only be able to use DynamoDB, and only for a single day.
If I could do this, it would solve my problem. The problem is that I couldn't figure out how to tell boto3 to do only the second stage of the signing. It seems to always take the full credentials, aws_access_key_id and aws_secret_access_key, and perform both stages. Is there a way to configure it to do only the second stage?
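
For reference, the first stage described above is the documented SigV4 key-derivation chain; a minimal Python sketch (the date, region, and service values are illustrative):

import hashlib
import hmac

def hmac_sha256(key, msg):
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key, date, region, service):
    # Stage 1 of Signature v4: the derived key is scoped to one day,
    # one region, and one service - exactly the property described above.
    k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date)
    k_region = hmac_sha256(k_date, region)
    k_service = hmac_sha256(k_region, service)
    return hmac_sha256(k_service, "aws4_request")

# e.g. derive_signing_key("<secret key>", "20240115", "us-east-1", "dynamodb")

Stage 2 would HMAC each request's string-to-sign with this key; as the questioner observes, though, boto3 seems to offer no hook for starting from a pre-derived key.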
Alternatively, is there a different way in AWS or IAM where someone like me who has credentials can use them to create temporary keys that can be used only for a short amount of time and/or only one specific service?
create temporary keys that can be used only for a short amount of time and/or only one specific service
Yes, that's why the AWS STS service exists. Specifically, you can use GetSessionToken, which:
Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token.
You can also create IAM roles and use STS's AssumeRole for the same thing. In fact, using IAM roles for instances is the preferred way to give temporary permissions to applications on EC2 instances; that way you don't have to use your own credentials at all.
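
A minimal boto3 sketch of the AssumeRole route (the role ARN is hypothetical; the role's attached policy would grant only the needed DynamoDB actions):

import boto3

# Run this on the secure machine with your long-term credentials.
sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dynamodb-test-role",  # hypothetical
    RoleSessionName="dynamodb-test",
    DurationSeconds=3600,  # the temporary credentials expire after an hour
)
creds = resp["Credentials"]

# Only these temporary values go to the test machine; even if stolen,
# they expire and are limited by the role's policy.
dynamodb = boto3.client(
    "dynamodb",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(dynamodb.list_tables()["TableNames"])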

AWS programmatic credential use in automation scripts

Right now we write scripts that run through the CLI to automate tasks or fetch things from AWS, using an AWS access key, secret access key, and session token.
These keys and tokens are valid for one hour, so if we use them an hour later the script fails.
It is also not practical to fetch the temporary credentials, update the script, and run it again by hand.
So what is the best possible solution in this situation? What should I do so that the script gets the updated credentials and runs with them automatically? Or is there another alternative so that we can still run scripts from our local machines using Boto with AWS credentials?
Any help is appreciated.
Bhavesh
I'm assuming that your script runs outside of AWS; otherwise you would simply configure your compute (EC2, Lambda, etc.) to assume an IAM role automatically.
If you have persistent IAM User credentials that allow you to assume the relevant role, then use those (a sketch follows below).
If you don't, take a look at the new IAM Roles Anywhere feature.
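
As a sketch of the persistent-credentials route, assuming a role-based profile is configured once (the profile and role names here are hypothetical):

import boto3

# ~/.aws/config (one-time setup; names are hypothetical):
#   [profile automation]
#   role_arn = arn:aws:iam::123456789012:role/automation-role
#   source_profile = my-iam-user
#
# With a profile like this, botocore calls AssumeRole on your behalf and
# transparently refreshes the one-hour temporary credentials, so a
# long-running or repeated script does not fail when they expire.
session = boto3.Session(profile_name="automation")
ec2 = session.client("ec2")
print([r["RegionName"] for r in ec2.describe_regions()["Regions"]])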

AWS boto3 user vs role

I am trying to follow best practices, but the documentation is not clear to me. I have a python script running locally that will move some files from my local drive to S3 for processing. Lambda picks it up from there and does the rest. So far I set up an AWS User for this process, and connected it to a "policy" that only has access to the needed resources.
The next step is to move my scripts to a Docker container on my local server. But I thought best practice would be to use a Role with policies instead of a User with policies. However, according to this documentation... in order to AssumeRole... I have to first be signed in as a user.
The calls to AWS STS AssumeRole must be signed with the access key ID and secret access key of an existing IAM user or by using existing temporary credentials such as those from another role. (You cannot call AssumeRole with the access key for the root account.) The credentials can be in environment variables or in a configuration file and will be discovered automatically by the boto3.client() function.
So no matter what, I'll need to embed my user credentials into my Docker image (or at least a separate secrets file).
If that is the case, then adding a "Role" in the middle, between the User and the Policies, seems completely redundant. Can anyone confirm or correct?
Roles and policies are for services running in AWS environments. For a Role you define a Trust Policy, which specifies which principal (User, Role, AWS service, etc.) can assume it. You also define the permissions that the assuming principal gets for accessing AWS services.
For services running inside AWS (EC2, Lambda, ECS), you can always select an IAM role to be assumed by your service. Your application then always receives temporary credentials corresponding to that role, and you never have to use an AWS Access Key ID and Secret.
However, this isn't possible for services running locally or otherwise outside the AWS environment. For your Docker container running locally, the only real option is to create an Access Key ID and Secret and copy them there. There are still some things you can do to keep your account secure:
Follow the principle of least privilege: create a policy that grants access to only the absolutely required resources.
Create a user (programmatic access only) and attach the policy. Use this user's AWS Access Key ID and Secret for your Docker container.
Make sure that the AWS credentials are rotated regularly.
Make sure that the secrets aren't committed to source control; prefer a secrets file or a vault system over environment variables.
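
A hedged sketch of that setup, assuming the host's credentials file is mounted read-only into the container and holds a dedicated low-privilege profile (the profile and bucket names are hypothetical):

import boto3

# Assumes the container was started with the credentials file mounted
# read-only, e.g.: docker run -v $HOME/.aws:/root/.aws:ro my-image
# The "docker-script" profile holds the dedicated user's keys, so nothing
# is baked into the image or committed to the codebase.
session = boto3.Session(profile_name="docker-script")
s3 = session.client("s3")
s3.upload_file("data.csv", "my-processing-bucket", "incoming/data.csv")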

AWS CLI log in command

I am starting out in the AWS world, and I recently configured my local environment to connect to my AWS account through the terminal, but I'm having a hard time finding the correct command to log in. Could someone please point me to how to do this?
Thank you in advance.
The AWS CLI does not "log in". Rather, each individual request is authenticated with a set of credentials (similar to a username and password). It's a bit like making a phone call -- you do not need to "log in" to your telephone. Instead, the system is configured to already know who you are.
To store credentials for use with the AWS CLI, you can run the aws configure command. It will prompt you for an Access Key and Secret Key, which will be stored in a configuration file. These credentials will then be used with future AWS CLI commands.
If you are using your own AWS Account, you can obtain an Access Key and Secret Key by creating an IAM User in the Identity and Access Management (IAM) management console. Simply select programmatic access to obtain these credentials. You will need to assign appropriate permissions to this IAM User. (It is not recommended to use your root login for such purposes.)
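
The closest thing to a "login check" is asking AWS who the stored credentials belong to; for example, with boto3:

import boto3

# Every request is individually signed with the stored credentials, so a
# successful call here confirms they are set up correctly.
print(boto3.client("sts").get_caller_identity()["Arn"])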
To work with AWS from your shell you will need:
IAM access keys (https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/)
the AWS CLI (https://aws.amazon.com/cli/)

How to avoid using a user profile to perform S3 operations without EC2 instances

According to much advice, we should not configure an IAM User but use an IAM Role instead, to avoid someone managing to grab the user credentials from the .aws folder.
Let's say I don't have any EC2 instances. Can I still perform S3 operations via the AWS CLI, such as aws s3 ls?
MacBook-Air:~ user$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
You are correct that, when running applications on Amazon EC2 instances or as AWS Lambda functions, an IAM role should be assigned that will provide credentials via the EC2 metadata service.
If you are not running on EC2/Lambda, then the normal practice is to use IAM User credentials that have been created specifically for your application, with least possible privilege assigned.
You should never store the IAM User credentials in an application -- there have been many cases of people accidentally committing such files to GitHub, where bad actors grab the credentials and gain access to the account.
You could store the credentials in a configuration file (eg via aws configure) and keep that file outside your codebase. However, there are still risks associated with storing the credentials in a file.
A safer option is to provide the credentials via environment variables, since they can be defined through a login profile and will never be included in the application code.
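
For example, with the standard variable names that both the CLI and boto3 read:

import boto3

# With AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY exported in the shell's
# login profile, boto3 - like `aws s3 ls` - finds them automatically, and
# they never appear in the application code.
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])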
I don't think you can use service roles on your personal machine.
You can, however, use multi-factor authentication with the AWS CLI.
You can use credentials on any machine, not just EC2.
Follow the steps described in the documentation for your OS:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html