I have a container running on an EC2 instance in a public VPC (this cannot be changed right now). To let this resource access DynamoDB, I created an IAM user, limited its access to my DynamoDB table, and then created access keys to use in my API calls.
My idea is to store these secrets in Secrets Manager and use its SDK from my EC2 instance to perform the operations I want.
However, it seems like a lot of effort, and creating a specific user just to limit the permissions does not feel right to me.
Am I on the right track? What would be the best approach to access DynamoDB programmatically from my EC2 instance?
I have read somewhere that I could grant permissions through a role so my EC2 instance could access DynamoDB.
Does that make sense?
Note: I have ECS running alongside my EC2 instance.
I am new to AWS; I used to work a lot with Azure, but mostly with serverless applications where I could easily use the identity management features to grant those permissions.
The details were all mentioned above.
I think it would be better to create an instance profile, which is essentially a wrapper around an IAM role, give it the DynamoDB permissions you want, and use it when you launch the instance. That way you do not need to store credentials at all, and this is generally the recommended way to access services from an instance, rather than using access keys.
Ref: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2_instance-profiles.html
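With the instance profile attached, the SDK finds the temporary credentials on its own. A minimal sketch with the AWS SDK for Java v2 (the region, table name, and key below are placeholders):

    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
    import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
    import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;

    import java.util.Map;

    public class DynamoFromInstanceProfile {
        public static void main(String[] args) {
            // No credentials are configured here: the default provider chain
            // falls back to the instance-profile credentials from the metadata service.
            try (DynamoDbClient dynamo = DynamoDbClient.builder()
                    .region(Region.US_EAST_1)              // placeholder region
                    .build()) {

                GetItemRequest request = GetItemRequest.builder()
                        .tableName("my-table")             // placeholder table name
                        .key(Map.of("id", AttributeValue.builder().s("123").build()))
                        .build();

                System.out.println(dynamo.getItem(request).item());
            }
        }
    }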
I did some searching and found this article; it matches your case exactly (EC2 + DynamoDB):
https://awstip.com/using-aws-iam-roles-with-ec2-and-dynamodb-7beb09af31b9
And yes, for EC2 the correct approach is to create an IAM role and attach it to your instance.
You can also use the following command from the instance to retrieve the temporary credentials (access key ID, secret access key, and session token) that the role provides (on instances that enforce IMDSv2, the request additionally needs a metadata session token header):
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<name-of-iam-role>
I would like to run a batch job on-prem and access AWS resources in our account.
I think the recommendation is to create an IAM user, which will be a machine user. Since I don't have a way to assign a role to the on-prem machine, or federate with AWS identity, I'll create an access key and install it on the on-prem machine. What's the best way to link my machine user to a policy?
I can create an IAM policy which allows the required actions (reading AWS SSM Parameters).
But, how should I link the machine user to the policy? I'm setting up these users/policies with Pulumi. Some options I'm aware of:
I can create a role, but then I think the machine user would have to assume the role. (My understanding is that roles do not have immediate "membership"; it's just that users have the ability to assume roles. Or AWS infrastructure can be set up with a role, like an EC2 instance or an EKS cluster can act as a role. In the future I do plan to move this job's execution to AWS infrastructure, but for now that's not an option.) Is assuming a role easy, for example an aws sts CLI call that I could put in my batch job's startup script before calling the main binary? (See the sketch at the end of this question.)
Or I could just attach the policy directly to the machine user. Generally that's not recommended from what I've read: you should have a layer between users and policies so when users change what they're doing you have indirection. But in this case maybe that's fine.
Or finally I could create a user group, attach the policy to the group, and add the machine user as a member of the group. Is that layer of indirection useful / an appropriate use of groups, especially if I'm already managing these policies with IaC? Most documentation recommends roles for the user-to-policy indirection, so I'm hesitant to use groups that way. However, that seems to be the expected approach for human users (glad for feedback on that too).
"Is it better to use AWS IAM User Group, or IAM Role for users to assume?" says a group would help manage permissions for multiple users (but so does Pulumi and I only have 1 or 2 machine users); and a role would help separate access rights from long-lived credentials but it seems like rotating the machine user's access key would have that benefit too without the extra assume-role step.
I am planning to use DynamoDB for the first time in my project. I initially connected to DynamoDB from my Java application using IAM user access keys, but then decided to add the permissions to the IAM role of the server where the application runs.
Am I doing it right? What's the best practice for this?
And if an IAM role is the right way to go, how do I handle applications connecting from my AWS WorkSpace (my dev environment)? Can I use an IAM role for that too?
An IAM role is the correct way to go. You create the role following the principle of least privilege, meaning you assign to it only the permissions that are absolutely necessary. In your case the role should only have access to the specific DynamoDB tables and indexes you use.
On EC2, in Lambda functions, and in general within the AWS environment, you assign this role. The service you are using will assume the role and be able to access DynamoDB, with no need to create access keys.
For your local dev environment (outside of AWS), you should create an IAM user that is granted the same least-privilege permissions (or is allowed to assume the role you've created) and generate an access key ID and secret for it. This way your local environment also only has access to the resources it needs.
If you also need your personal AWS credentials on the local machine, you can use named profiles to keep them separate.
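For the local environment, a minimal sketch with the AWS SDK for Java v2, assuming a named profile called "my-app-dev" (a placeholder) that holds that user's key:

    import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

    public class DevEnvironmentClient {
        public static void main(String[] args) {
            // Reads the access key for the "my-app-dev" profile from ~/.aws/credentials,
            // keeping it separate from any personal credentials on the machine.
            try (DynamoDbClient dynamo = DynamoDbClient.builder()
                    .region(Region.EU_WEST_1)                                             // placeholder region
                    .credentialsProvider(ProfileCredentialsProvider.create("my-app-dev"))
                    .build()) {
                dynamo.listTables().tableNames().forEach(System.out::println);
            }
        }
    }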
Handling credentials when using the AWS SDK for Java is explained in the AWS Java Developer Guide in these topics:
Get started with the SDK for Java
Using credentials
This guide explains best practices.
I have a scaling group of several EC2 instances.
I have API keys which I would like to distribute to the instances using round-robin.
How can I code the instances to get the credentials once they go live?
Is there an AWS service for that?
These are not AWS credentials; if they were, the problem could be solved by defining IAM roles.
Thanks
Use "user data" option when you start your EC2 instance, You can run the bash script.
I recommend the following step.
1-put your cred or other shared information to S3 or dynamoDB.
2-write script to read and setting this data when your EC2 was starting.
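If your application is in Java, step 2 could also be done inside the app itself; a rough sketch with the AWS SDK for Java v2 (the bucket and object key are placeholders):

    import software.amazon.awssdk.core.ResponseBytes;
    import software.amazon.awssdk.services.s3.S3Client;
    import software.amazon.awssdk.services.s3.model.GetObjectRequest;
    import software.amazon.awssdk.services.s3.model.GetObjectResponse;

    import java.nio.charset.StandardCharsets;

    public class StartupKeyLoader {
        public static void main(String[] args) {
            // The instance role authorizes this S3 read, so no AWS keys are stored on the box.
            try (S3Client s3 = S3Client.create()) {
                ResponseBytes<GetObjectResponse> bytes = s3.getObjectAsBytes(
                        GetObjectRequest.builder()
                                .bucket("my-shared-config-bucket")   // placeholder bucket
                                .key("api-keys/worker-key.txt")      // placeholder object key
                                .build());
                String apiKey = bytes.asString(StandardCharsets.UTF_8).trim();
                System.out.println("Loaded API key of length " + apiKey.length());
            }
        }
    }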
The closest thing AWS has to this is called IAM Roles. A role includes a set of IAM permissions (like an IAM user). When you start a VM, you can set the role of the VM. The VM can then call the AWS API and get temporary credentials that give it access to the services that are defined in the IAM role.
See here for more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
This does not exactly meet your requirement for round-robin credentials distribution. But it might be a better option. IAM roles are as secure a method of distributing credentials to EC2 instances as you can get.
AWS now provides two services that could be used for that purpose:
Secrets Manager would seem to be the most fitting, but it does cost money from the start.
Parameter Store is also an option and is free for up to 10,000 standard parameters.
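For example, each instance could fetch its key at startup; a rough sketch with the AWS SDK for Java v2 and Secrets Manager (the secret name is a placeholder, and the instance role would need permission to read it):

    import software.amazon.awssdk.services.secretsmanager.SecretsManagerClient;
    import software.amazon.awssdk.services.secretsmanager.model.GetSecretValueRequest;

    public class FetchApiKey {
        public static void main(String[] args) {
            // Authorized through the instance role, so no AWS keys live on the instance.
            try (SecretsManagerClient secrets = SecretsManagerClient.create()) {
                String apiKey = secrets.getSecretValue(
                                GetSecretValueRequest.builder()
                                        .secretId("worker/external-api-key")  // placeholder secret name
                                        .build())
                        .secretString();
                System.out.println("Fetched key of length " + apiKey.length());
            }
        }
    }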
I am exploring Amazon IAM roles. I want to know how we can apply an IAM role to an EC2 instance so that an application running on it can access AWS services.
Any lead is highly appreciated.
Thanks
You can attach a role to an instance to give that instance specific permissions to use the AWS API.
For example: you deploy a Java application on Tomcat and you want it to use DynamoDB or S3. You need an access key and secret key with the proper permissions. How would your application get these? A configuration file? Burned into the AMI? Stored in a database? None of these are secure or manageable at scale.
This is where roles kick in.
You define a role in IAM and attach the permissions it needs.
When you create the instance, you attach the role (roles can now also be attached to a running instance).
From the instance, a private web service (the instance metadata service) hands out temporary credentials, limited to the permissions specified in the role.
The best part is that the AWS SDKs know about this and fetch and refresh the keys for you automatically.
Check out the docs for more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
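For the Tomcat example above, a hedged sketch with the AWS SDK for Java v2 that explicitly uses the instance-profile provider (the bucket name is a placeholder); in practice the default credentials chain finds these keys without any configuration at all:

    import software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider;
    import software.amazon.awssdk.services.s3.S3Client;

    public class S3ViaInstanceRole {
        public static void main(String[] args) {
            // Explicitly pin the provider that reads temporary keys from the
            // instance metadata service; no keys appear in code or config files.
            try (S3Client s3 = S3Client.builder()
                    .credentialsProvider(InstanceProfileCredentialsProvider.create())
                    .build()) {
                s3.listObjectsV2(r -> r.bucket("my-application-bucket"))   // placeholder bucket
                  .contents()
                  .forEach(o -> System.out.println(o.key()));
            }
        }
    }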
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. cross-account access between AWS accounts. This post makes it sound straightforward:
Use the amazon IAM service to create a set of keys that only has
permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to only performing certain actions on certain buckets. You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
Another option is for them to provide you with temporary credentials via the Security Token Service (STS). A variant on that is to use the newer IAM roles feature: with a role, an instance has a set of policies associated with it, and you don't need to provide an AwsCredentials.properties file because the SDK can fetch credentials from the metadata service.
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account that is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.
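To make the STS/role option concrete, here is a hedged sketch of the common cross-account pattern with the AWS SDK for Java v2: the customer creates a role in their account that trusts your account, and your code assumes it (the role ARN and external ID below are placeholders):

    import software.amazon.awssdk.services.ec2.Ec2Client;
    import software.amazon.awssdk.services.sts.StsClient;
    import software.amazon.awssdk.services.sts.auth.StsAssumeRoleCredentialsProvider;
    import software.amazon.awssdk.services.sts.model.AssumeRoleRequest;

    public class CustomerAccountAccess {
        public static void main(String[] args) {
            StsClient sts = StsClient.create(); // uses your own account's credentials

            // Assume the role the customer created in their account; the external ID is a
            // shared secret that protects against the confused-deputy problem.
            StsAssumeRoleCredentialsProvider customerCreds = StsAssumeRoleCredentialsProvider.builder()
                    .stsClient(sts)
                    .refreshRequest(AssumeRoleRequest.builder()
                            .roleArn("arn:aws:iam::999999999999:role/research-worker") // placeholder
                            .roleSessionName("research-run")
                            .externalId("example-external-id")                         // placeholder
                            .build())
                    .build();

            // This client now operates inside the customer's account, limited to
            // whatever the role's policies allow (e.g. launching worker instances).
            Ec2Client ec2 = Ec2Client.builder().credentialsProvider(customerCreds).build();
            ec2.describeInstances().reservations()
               .forEach(r -> r.instances().forEach(i -> System.out.println(i.instanceId())));
        }
    }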