I am looking for ways to automate the rotation of access keys (AWS credentials) for a set of users. There is a separate process that creates the access keys; I need to rotate them in an automated way. This link explains a way to do this for a specific user. How would I be able to achieve this for a list of users? Any thoughts or recommendations?
You can use AWS Config to mark old access keys non-compliant (https://docs.aws.amazon.com/config/latest/developerguide/access-keys-rotated.html) and then use CloudWatch Events (my article explains how to do this) to run a Lambda function that deletes the old key, creates a new one, and then sends it to the user.
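To extend that to a list of users, the Lambda handler can simply loop over them. Here is a rough boto3 sketch, not the article's code: the hard-coded user list, the 90-day threshold, and the notify_user helper are placeholders you would replace with your own mechanism (for example, pulling the users from a tag, a DynamoDB table, or the Config event itself).

import boto3
from datetime import datetime, timezone, timedelta

iam = boto3.client("iam")

USERS = ["alice", "bob"]         # placeholder: supply your own user list
MAX_AGE = timedelta(days=90)     # placeholder: your rotation threshold

def notify_user(user, access_key):
    # placeholder: deliver the new key out-of-band (SES, Secrets Manager, etc.)
    pass

def lambda_handler(event, context):
    now = datetime.now(timezone.utc)
    for user in USERS:
        for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
            if now - key["CreateDate"] > MAX_AGE:
                # create the replacement first, then remove the old key
                new_key = iam.create_access_key(UserName=user)["AccessKey"]
                notify_user(user, new_key)
                iam.delete_access_key(UserName=user,
                                      AccessKeyId=key["AccessKeyId"])

Keep in mind that IAM allows at most two access keys per user, so if a user already has two you would need to delete or deactivate the old one before creating the replacement.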
Access keys are generally used for programmatic access by applications. If these applications are running in, say, EC2, you should use IAM roles for EC2 instead. This installs temporary credentials on the instance that are rotated automatically for you. The AWS CLI and SDKs know how to retrieve these credentials on their own, so you don't need to embed them in the application either.
Other compute solutions (Lambda, ECS/EKS) also have ways to provision roles for applications.
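As a quick illustration of that point, here is a minimal Python sketch run on an instance with a role attached; it assumes the role grants s3:ListAllMyBuckets, and there are no keys anywhere in the code or config.

import boto3

# No access keys in code: boto3 walks its credential chain and picks up
# the role's temporary credentials from the instance automatically.
s3 = boto3.client("s3")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])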
Related
I have a container running on EC2, currently in a public VPC (it cannot be changed right now). In order for this resource to access DynamoDB, I created a user, limited its access to my table in DynamoDB, and then created access keys to use in my API calls.
My idea is to store these secrets in Secrets Manager and use its SDK from my EC2 instance to perform the operations I want.
However, it just seems like a lot of effort, and creating a specific user just to limit the permissions does not feel right to me.
Am I on the right track? What would be the best approach to access DynamoDB programmatically from my EC2 instance?
I have read somewhere that I could grant permissions through a role so my EC2 instance could access DynamoDB.
Does that make sense?
Note: I have ECS running alongside my EC2 instance.
I am new to AWS and used to work a lot with Azure, but mostly with serverless applications, where I could easily use the Identity Management feature to grant those permissions.
The details were all mentioned above.
I think it would be better to create an instance profile with the permissions you want for DynamoDB (an instance profile is essentially a wrapper around an IAM role) and then launch the instance with that role. That way you do not need to store credentials at all, and this is generally the recommended way to access services from an instance, rather than using access keys.
Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
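If you want to script that setup rather than click through the console, a rough boto3 sketch might look like the following. The role name, profile name, and instance ID are placeholders, and the AmazonDynamoDBFullAccess managed policy is used only for brevity; you would normally attach a policy scoped to your table.

import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="ec2-dynamodb-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="ec2-dynamodb-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess")

iam.create_instance_profile(InstanceProfileName="ec2-dynamodb-profile")
iam.add_role_to_instance_profile(InstanceProfileName="ec2-dynamodb-profile",
                                 RoleName="ec2-dynamodb-role")

# attach the profile to an already running instance (placeholder instance ID)
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "ec2-dynamodb-profile"},
    InstanceId="i-0123456789abcdef0",
)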
I did some searching and found this article, which matches your case exactly (EC2 + DynamoDB):
https://awstip.com/using-aws-iam-roles-with-ec2-and-dynamodb-7beb09af31b9
And yes, for EC2 the correct approach is to create an IAM role and attach it to your instance.
The following command can also be used to retrieve the temporary credentials (access key ID, secret access key, and session token) that the IAM role provides:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<name-of-iam-role>
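You normally never need to call the metadata endpoint yourself, though: once the role is attached, the SDK picks those credentials up on its own. A minimal Python sketch, assuming a table named MyTable in us-east-1 with a string partition key called pk:

import boto3

# No credentials in code: boto3 reads the role's temporary credentials
# from the instance metadata service automatically.
table = boto3.resource("dynamodb", region_name="us-east-1").Table("MyTable")
table.put_item(Item={"pk": "user#1", "name": "example"})
print(table.get_item(Key={"pk": "user#1"}).get("Item"))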
I'm trying to set up AWS IAM Identity Center (successor to AWS Single Sign-On) for my organisation, and my team has a strong preference for Infrastructure as Code (IaC) wherever practical.
While exploring solutions, I was able to set up an Instance with several Users, Groups and Permission Sets using the Management Console UI. However, now that I have come to set up something more long-term, I can't find any way to create an Instance via either CloudFormation or the AWS CLI.
When looking for documentation, I found the CloudFormation reference for AWS SSO, as well as the AWS CLI reference for the sso-admin subcommand. Neither mentions any operations that create instances. Neither does the AWS SSO API reference, which leads me to think programmatic access may not be possible.
Is it possible to create an Instance through code rather than the Management Console?
If it is possible, what have I missed?
How do I configure credentials to use AWS services from inside EKS? I cannot use the AWS SDK for this specific purpose. I have specified a role with the required permissions in the YAML file, but it does not seem to be picking up the role.
Thank you. Any help is appreciated.
Typically you'd want to apply some level of logic to allow the pods themselves to obtain IAM credentials from STS. AWS does not currently provide a native way to do this (it's re:Invent now, so you never know). The two community solutions we've implemented are:
kube2IAM: https://github.com/jtblin/kube2iam
kIAM: https://github.com/uswitch/kiam
Both work well in production/large environments in my experience. I prefer kIAM's security model, but both get the job done.
Essentially they work the same basic way: intercepting (for lack of a better word) communications between the SDK libraries in the container and STS, matching the identity of the pod against an internal role dictionary, and then obtaining STS credentials for that role and handing those credentials back to the container. The SDK isn't inherently aware it's in a container; it's just doing what it does anywhere else, walking its access tree until it sees the need to obtain credentials from STS and receiving them.
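A quick way to confirm the interception is actually happening is to call STS from inside the pod and check which identity the credentials belong to. A minimal Python sketch, assuming boto3 is available in the container image:

import boto3

# If kube2iam/kiam is doing its job, the ARN printed here is the pod's
# annotated role, not the node's instance role.
print(boto3.client("sts").get_caller_identity()["Arn"])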
Can AWS IAM be used to control access for custom applications? I heavily rely on IAM for controlling access to AWS resources. I have a custom Python app that I would like to extend to work with IAM, but I can't find any references to this being done by anyone.
I've considered the same thing, and I think it's theoretically possible. The main issue is that there's no call available in IAM that determines whether a particular call is allowed (SimulateCustomPolicy may work, but that doesn't seem to be its intended purpose, so I'm not sure it would have the throughput to handle high volumes).
As a result, you'd have to write your own IAM policy evaluator for those custom calls. I don't think that's inherently a bad thing, since it's also something you'd have to build for any other policy-based system. And the IAM policy format seems reasonable enough to be used.
I guess the short answer is, yes, it's possible, with some work. And if you do it, please open source the code so the rest of us can use it.
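For reference, this is roughly what a SimulateCustomPolicy call looks like with boto3. The policy, action, and bucket below are made up just to show the call shape; whether the API would hold up under the request volume of a busy application is exactly the open question above.

import json
import boto3

iam = boto3.client("iam")

# A made-up policy and resource, purely to show the call shape.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::example-bucket/*",
    }],
}

result = iam.simulate_custom_policy(
    PolicyInputList=[json.dumps(policy)],
    ActionNames=["s3:GetObject"],
    ResourceArns=["arn:aws:s3:::example-bucket/report.csv"],
)
print(result["EvaluationResults"][0]["EvalDecision"])  # e.g. "allowed"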
The only way you can manage users and create roles and groups is if you have admin access. Power users can do everything except that.
You can create a group with all the privileges you want to grant, then create a user that gets its policies from that group. Create the user with programmatic access only, so the app can connect with the access key ID and secret access key via the AWS CLI.
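A rough boto3 sketch of that setup follows; the group name, user name, and managed policy are placeholders, and you would attach a policy scoped to whatever the app actually needs.

import boto3

iam = boto3.client("iam")

# Group carrying the permissions (placeholder managed policy for brevity)
iam.create_group(GroupName="my-app-group")
iam.attach_group_policy(GroupName="my-app-group",
                        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

# Programmatic-access-only user: no console password, just an access key
iam.create_user(UserName="my-app-user")
iam.add_user_to_group(GroupName="my-app-group", UserName="my-app-user")

key = iam.create_access_key(UserName="my-app-user")["AccessKey"]
print(key["AccessKeyId"])  # store the secret securely; it is shown only once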
Normally, IAM can be used to create and manage AWS users and groups, and permissions to allow and deny their access to AWS resources.
If your Python app is consuming or interfacing with any AWS resource, such as S3, then you might want to look into this:
connect-on-premise-python-application-with-aws
The Python application can be uploaded to an S3 bucket. The application runs on a server inside the company's on-premises data center. The focus of this tutorial is on the connection made to AWS.
Consider placing API Gateway in front of your Python app's routes.
Then you could control access using IAM.
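With IAM authorization enabled on the API Gateway method, callers have to SigV4-sign their requests, and API Gateway evaluates the caller's IAM policy before the request ever reaches your Python app. A minimal sketch of what a client call could look like; the endpoint URL and region are placeholders, and the requests library is assumed to be installed:

import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

url = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/items"  # placeholder

credentials = boto3.Session().get_credentials().get_frozen_credentials()
aws_request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "execute-api", "us-east-1").add_auth(aws_request)

# The signed headers are what API Gateway checks against the caller's IAM policy.
response = requests.get(url, headers=dict(aws_request.headers))
print(response.status_code, response.text)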
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the access key ID and secret access key, which is fine for examples but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems intended for this purpose (I haven't got my head around it yet), especially cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to performing only certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles feature: an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
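The STS route would look roughly like this from your side, assuming the customer has created a cross-account role for you; the role ARN and external ID are placeholders they would give you, and the sketch uses boto3 for brevity even though the same AssumeRole call exists in the Java SDK.

import boto3

sts = boto3.client("sts")

# Placeholder role ARN / external ID supplied by the customer
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/research-worker-role",
    RoleSessionName="research-run",
    ExternalId="example-external-id",
)["Credentials"]

# Temporary credentials scoped to whatever the customer's role allows
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=assumed["AccessKeyId"],
    aws_secret_access_key=assumed["SecretAccessKey"],
    aws_session_token=assumed["SessionToken"],
)
print(ec2.describe_instances()["Reservations"])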
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account that is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.