I have been working for the past week with access and secret keys that I generated for connecting a REST API to DynamoDB and the AWS CLI. Today the offshore team told me that I'm not supposed to use access and secret keys at all; I'm supposed to use IAM roles instead. I have been researching how to do that, but I'm stuck. Has anyone here run into the same issue?
If everything was done the way you described in the question and in your comment reply to #stdunbar, it is impossible to do so; that is exactly what secret and access keys are for. I don't think your offshore team knows what they are talking about.
There are methods to acquire STS session keys from AWS (for example, when you assume a role). One solution is HashiCorp Vault, but this requires that the infrastructure has been configured to allow it. There are other methods that use the web UI session to generate an STS token.
Ask your offshore team which method you should use to get a role-based session token. You are probably used to the CLI asking for an Access Key ID and Secret Access Key. A session credential comes in three parts instead of two: the session access key ID starts with ASIA instead of AKIA; the session secret access key looks the same as its static counterpart; and the session token is a very long string.
The easiest way to set these is to edit the credentials file at ~/.aws/credentials. If you use aws configure you won't be prompted for the session token, but you can use aws configure set for each of the three parts. If you don't already have profiles set up in your credentials file, you can simply edit the default profile.
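As an illustrative sketch (standard library only): the three session values can be written into a profile programmatically. The file path and the credential values below are placeholders; the real file lives at ~/.aws/credentials, but the key names are the ones the AWS CLI actually reads.

```python
import configparser

# In practice this would be os.path.expanduser("~/.aws/credentials");
# a local file is used here so the sketch is self-contained.
creds_path = "credentials"

config = configparser.ConfigParser()
config.read(creds_path)  # preserve any existing profiles, if present

# All three parts of a session credential must be set together.
# The values below are placeholders, not real credentials.
config["default"] = {
    "aws_access_key_id": "ASIAEXAMPLEKEYID",
    "aws_secret_access_key": "exampleSecretAccessKey",
    "aws_session_token": "example-very-long-session-token",
}

with open(creds_path, "w") as f:
    config.write(f)
```

The equivalent with the CLI itself would be three `aws configure set` calls, one per key name above.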
Source: https://docs.aws.amazon.com/cli/latest/reference/configure/index.html
The point that they're (correctly) making is that your application should not include explicit credentials.
Instead, the application should be configured in Elastic Beanstalk with an IAM role. When the application runs and uses an AWS SDK, the SDK will be able to retrieve temporary credentials from the Beanstalk environment that it is running on.
You can read more at Managing Elastic Beanstalk Instance Profiles.
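As a sketch, assuming boto3 and an environment where the role is already attached (such as a Beanstalk instance), the client is created without any keys at all; the region name here is an illustrative placeholder:

```python
import boto3

# No aws_access_key_id / aws_secret_access_key anywhere in the code:
# the SDK walks its credential chain and picks up the role's temporary
# credentials from the environment it is running in.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
```

Run outside such an environment, the calls would simply fail with a credentials error; nothing sensitive ever lives in the application's configuration.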
Related
I have an AWS account with full access to DynamoDB.
I am writing an application that uses DynamoDB. I would like to test this application backed by the real DynamoDB (and not any local compatible/mock solution). However, the test application is not as secure as a real production-ready application, and there is a real risk that during my tests an attacker may break into the test machine. If my real AWS credentials (needed to write to DynamoDB) are on that machine, they may be stolen, and the attacker can then do anything that I can do in my account - e.g., create expensive VMs next week and mine for bitcoin.
So I'm looking for an alternative to saving my real AWS credentials (access key id and secret access key) on the test machine.
I read about Amazon's signature algorithm v4, and it turns out that its signature process is actually two-staged: First a "signing key" is calculated from the full credentials and this signing key works only for a single day on a single service - and then this "signing key" is used to sign the individual messages. This suggests that I could calculate the signing key on a secure machine and send it to the test machine - and the test machine will only do the second stage of the signature algorithm, and will only be able to use DynamoDB and only for a single day.
If I could do this, this would solve my problem, but the problem is that I couldn't figure out how I can tell boto3 to only do the second stage of the signing. It seems it always takes the full credentials aws_access_key_id and aws_secret_access_key - and does both stages of the signature. Is there a way to configure it to only do the second stage?
Alternatively, is there a different way in AWS or IAM or something, where someone like me that has credentials can use them to create temporary keys that can be used only for a short amount of time and/or only one specific service?
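For reference, the first stage described above can be sketched with the standard library alone; this is the documented SigV4 key derivation (four chained HMAC-SHA256 steps over date, region, service, and the literal "aws4_request"), with placeholder inputs:

```python
import hashlib
import hmac

def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    """Stage one: derive a signing key scoped to one day (date_stamp is
    YYYYMMDD), one region, and one service."""
    k_date = hmac.new(("AWS4" + secret_key).encode(),
                      date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

def sign(signing_key: bytes, string_to_sign: str) -> str:
    """Stage two: sign an individual request's string-to-sign."""
    return hmac.new(signing_key, string_to_sign.encode(),
                    hashlib.sha256).hexdigest()
```

The catch, as the question notes, is that boto3's signers expect the full secret key, not a pre-derived stage-one key, so plugging this in would require a custom signer.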
create temporary keys that can be used only for a short amount of time and/or only one specific service
Yes, that's why the AWS STS service exists. Specifically, you can use GetSessionToken, which:
Returns a set of temporary credentials for an AWS account or IAM user. The credentials consist of an access key ID, a secret access key, and a security token.
You can also create IAM roles and use STS's AssumeRole for the same thing. In fact, using IAM roles for instances is the preferred way to give temporary permissions to applications on EC2 instances. This way you don't have to use your own credentials at all.
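As a minimal CLI illustration (this assumes long-term credentials are already configured, and actually calls AWS):

```shell
# Returns an AccessKeyId, SecretAccessKey, and SessionToken that
# expire after the requested duration (here, one hour).
aws sts get-session-token --duration-seconds 3600
```

The returned temporary credentials can then be placed in a profile and used in place of the long-term keys.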
We're using a AWS Access Key/Secret pair for S3 uploads on an old client website. During an audit, we discovered that the Access Key used for uploads, while still working, doesn't appear to exist in any IAM user for the client's AWS account. I ran aws sts get-access-key-info --access-key-id=[old key] and it provided the correct AWS account id for our client. But searching for this key in our IAM users (https://console.aws.amazon.com/iam/home?#/users) shows no results. How can this be? Can the Access Key live somewhere else outside IAM?
Accepted answer from #kdgregory
It could belong to the root user. If yes, then you very much want to disable it. – kdgregory
AFAIK, an AWS access key/secret pair can only be viewed and stored once, when you create the IAM user. I don't think you can retrieve the pair again anywhere in AWS unless you have it stored somewhere yourself (which is not good practice, BTW).
I am starting out in the AWS world, and I recently configured my local environment to connect to my AWS account through the terminal, but I'm having a hard time finding the correct command to log in. Could someone please point me to how to do this?
Thank you beforehand
The AWS CLI does not "log in". Rather, each individual request is authenticated with a set of credentials (similar to a username and password). It's a bit like making a phone call -- you do not need to "log in" to your telephone. Instead, the system is configured to already know who you are.
To store credentials for use with the AWS CLI, you can run the aws configure command. It will prompt you for an Access Key and Secret Key, which will be stored in a configuration file. These credentials will then be used with future AWS CLI commands.
If you are using your own AWS Account, you can obtain an Access Key and Secret Key by creating an IAM User in the Identity and Access Management (IAM) management console. Simply select programmatic access to obtain these credentials. You will need to assign appropriate permissions to this IAM User. (It is not recommended to use your root login for such purposes.)
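A typical first run looks like this (transcript; the key values shown are placeholders you would replace with your own):

```shell
$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEYID
AWS Secret Access Key [None]: exampleSecretAccessKey
Default region name [None]: us-east-1
Default output format [None]: json
```

After that, commands such as `aws s3 ls` use the stored credentials automatically, with no separate login step.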
To use AWS from your shell you will need:
IAM keys (https://aws.amazon.com/premiumsupport/knowledge-center/create-access-key/)
AWS Cli (https://aws.amazon.com/cli/)
Let's say I have an on-premises application that needs to access various AWS services such as S3, CloudWatch, etc. What is the correct way to handle this authentication? I have read recommendations to create a new IAM role and then distribute the AWS keys on the server that the application runs on. But wouldn't this be very bad practice in case the keys get stolen or exposed in some way? It would also be more work to rotate credentials, for example. Is it possible to assign roles in some other way, or is this the correct way to do it? Isn't it better to assign roles, or is that not possible when the app isn't running in AWS?
Create an IAM user with “Programmatic Access” only, which will provide you with a key and secret pair.
As a general rule, your application can use one set of credentials to get another, more privileged set of credentials. The app must be able to authenticate somehow so it needs some basic form of service account credentials to start with.
One way you can do this is to create an IAM user with minimal privileges. This IAM user is able to assume a specific IAM service role, but nothing else. That service role actually confers permissions to interact with S3, CloudWatch etc. Your application is configured with, or somehow securely retrieves, the credentials associated with the IAM user. Your application then uses these to call STS and assume the IAM service role, getting back short-lived STS credentials (access key, secret key, and session token). You should leverage the additional 'external ID' with the IAM role, as one more security factor.
Your application is also responsible for getting a new set of credentials before the existing set expires. You can do that in a number of ways, for example by using new STS credentials for every single request you make (so they never expire mid-use), or simply by paying attention to the credentials' expiration time and refreshing before they lapse.
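The "refresh before expiry" approach can be sketched as follows. This is a minimal illustration, not a production implementation: `fetch_credentials` stands in for whatever callable performs the actual AssumeRole call and returns `(credentials, expiration)`, and all names here are assumptions.

```python
from datetime import datetime, timedelta, timezone

class RefreshingCredentials:
    """Caches a credential set and re-fetches it shortly before expiry."""

    def __init__(self, fetch_credentials, margin=timedelta(minutes=5)):
        self._fetch = fetch_credentials   # returns (creds, expiration)
        self._margin = margin             # refresh this long before expiry
        self._creds = None
        self._expires = None

    def get(self):
        now = datetime.now(timezone.utc)
        if self._expires is None or now >= self._expires - self._margin:
            # Expired (or about to): fetch a fresh set via AssumeRole.
            self._creds, self._expires = self._fetch()
        return self._creds
```

Callers ask for `get()` on every request and always receive credentials that are valid for at least the margin.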
Also, read Temporary Credentials for Users in Untrusted Environments.
If your application is running on an Amazon EC2 instance and it is the only application on that instance, then:
Create an IAM Role
Assign the appropriate permissions to the Role
Assign the IAM Role to the EC2 instance
Any software running on the instance will automatically have access to credentials to access AWS. These credentials automatically rotate every 6 hours.
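From the instance itself, you can observe these auto-rotated credentials through the instance metadata service (IMDSv2 shown; this only works on an EC2 instance, and the role name is appended to the path by a second request):

```shell
# Get an IMDSv2 session token, then list the role attached to the instance.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/security-credentials/
```

The AWS SDKs and CLI perform this lookup for you automatically; you never need to do it by hand in application code.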
If you are not running on an EC2 instance:
Create an IAM User
Assign the appropriate permissions to the User
Generate credentials for the User (Access Key, Secret Key) and store them in a credentials file on the computer being used by the application
Any software running on that computer will automatically have access to these credentials to access AWS.
I've been looking into getting the AWS (web) console hooked up to an AD or ADFS setup for managing users. It was reasonably easy to get working with a SAML Identity Provider in IAM and some existing ADFS infrastructure.
The problem is that users that authenticate that way, as opposed to normal AWS user accounts, don't have any way to have associated access keys so far as I can tell. Access keys are a key concept for authenticating stuff such as the AWS CLI, which needs to be tied to individual user accounts.
What are the workarounds that allow a user authenticated via a SAML identity provider to still easily use the AWS CLI? The only thing I've come up with so far is some hacky crap that would proxy the aws CLI command, request temporary 1-hour credentials from the AWS STS service, put them in the AWS credentials file, and forward the command to the normal AWS CLI. But that makes me want to throw up a little bit; plus, I have no idea whether it would work if a command took over an hour to complete (large S3 uploads, etc.).
Suggestions? I would try the official Directory Service AD connector, but my understanding is users still just assume IAM roles and would ultimately have the same problem.
https://github.com/Versent/saml2aws was created to address this, and has a vibrant open source community behind it.
I've had success with aws-adfs for using the AWS CLI via ADFS.
The repo owner is currently adding support for DUO MFA as well.
It works by authenticating the user against the same page you'd use for console access and then scraping the available roles. You choose a role, and aws-adfs sets the default profile to the credential set needed for STS access.
After the default profile is set, you can use the CLI as normal: aws s3 ls
https://github.com/venth/aws-adfs