I have a scaling group of several EC2 instances.
I have API keys which I would like to distribute to the instances using round-robin.
How can I code the instances to get the credentials once they go live?
Is there an AWS service for that?
These are not AWS credentials, which could otherwise be handled by defining IAM Roles.
Thanks
Use "user data" option when you start your EC2 instance, You can run the bash script.
I recommend the following step.
1-put your cred or other shared information to S3 or dynamoDB.
2-write script to read and setting this data when your EC2 was starting.
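A minimal user-data sketch, assuming the key was uploaded to an S3 bucket beforehand; the bucket name, object key, and target path are all placeholders, and the instance needs an IAM role that allows s3:GetObject:

#!/bin/bash
# Runs once at first boot via the EC2 "user data" mechanism.
# Fetch the shared API key from S3 and store it for the application.
mkdir -p /etc/myapp
aws s3 cp s3://example-config-bucket/api-keys/instance-key.txt /etc/myapp/api-key.txt
chmod 600 /etc/myapp/api-key.txt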
The closest thing AWS has to this is called IAM Roles. A role includes a set of IAM permissions (like an IAM user). When you start a VM, you can set the role of the VM. The VM can then call the AWS API and get temporary credentials that give it access to the services that are defined in the IAM role.
See here for more details: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
This does not exactly meet your requirement for round-robin credentials distribution. But it might be a better option. IAM roles are as secure a method of distributing credentials to EC2 instances as you can get.
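For example, the role is attached when the instance is launched, via an instance profile; a hedged sketch (the AMI ID and profile name are placeholders):

# Launch an instance that gets temporary credentials from the role
# wrapped in the "MyAppRole" instance profile.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --iam-instance-profile Name=MyAppRole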
AWS now provides two services that could be used for that purpose:
The Secrets Manager would seem to be the most fitting, but does cost money from the start.
The Parameter Store is also an option and is free for up to 10k parameters.
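A quick sketch of the Parameter Store flow, with placeholder names and values; the key is stored once as an encrypted SecureString, then each instance reads it at boot:

# Store the shared API key (done once, from an admin machine)
aws ssm put-parameter --name /myapp/api-key --value "s3cr3t-value" --type SecureString

# Read it back from an instance whose role allows ssm:GetParameter
aws ssm get-parameter --name /myapp/api-key --with-decryption --query Parameter.Value --output text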
Related
I was reading Default VPC and Default Subnets - Amazon Virtual Private Cloud about default VPC creation by AWS. Under default VPC components, it states "Amazon creates the above VPC components on behalf of the customers. IAM policies do not apply to those actions because the customers do not perform those actions".
My question is: we need to create an IAM role for an AWS service to call another AWS service, e.g., EC2 invoking S3, so why do IAM policies not apply when AWS builds resources on our behalf?
Thanks in advance for any input.
In your example of Amazon EC2 connecting to Amazon S3, it is actually your program code running on an Amazon EC2 instance that is making calls to Amazon S3. The API calls to S3 need to be authenticated and authorized via IAM credentials.
There are also situations where an AWS service calls another AWS service on your behalf using a service-linked role, such as when Amazon EC2 Auto Scaling launches new Amazon EC2 instances. This requires provision of a Service-Linked Role for Amazon EC2 Auto Scaling, which gives one service permission to call another service.
In the case of creating a Default VPC, this is something that AWS does before an account is given to a customer. This way, customers can launch resources (eg an Amazon EC2 instance) without having to first create a VPC. It is part of the standard account setup.
It appears that AWS has also exposed the CreateDefaultVpc() command to recreate the Default VPC. The documentation is saying that permission to make this API call is sufficient for creating the resources, without requiring permissions for each underlying call that it probably generates. I guess it is using the permissions that would normally be associated with a service-linked role, except that there is no service-linked role for VPC actions. If you are concerned about people creating these resources (eg an Internet Gateway), you can deny permissions on users for being able to call CreateDefaultVpc(), which will prevent them from using the command.
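A hedged sketch of such a deny policy; the user name and policy name are placeholders:

# Attach an inline policy that blocks recreation of the Default VPC
aws iam put-user-policy \
  --user-name some-user \
  --policy-name DenyCreateDefaultVpc \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Deny",
      "Action": "ec2:CreateDefaultVpc",
      "Resource": "*"
    }]
  }'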
Think of your AWS account as the "root": AWS essentially has a "super root" level above it, with which they trigger the initial creation of your account. This all occurs when your account is initially set up and configured, since they have that "super root" level of access as part of being the product owners.
We are limited by IAM (and I assume AWS is limited in a different way) so that we can apply the Principle of Least Privilege.
According to much advice, we should not configure an IAM user but use an IAM role instead, to avoid someone managing to grab the user credentials from the .aws folder.
Let's say I don't have any EC2 instances. Am I still able to perform S3 operations via the AWS CLI? Say, aws s3 ls:
MacBook-Air:~ user$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
You are correct that, when running applications on Amazon EC2 instances or as AWS Lambda functions, an IAM role should be assigned that will provide credentials via the EC2 metadata service.
If you are not running on EC2/Lambda, then the normal practice is to use IAM User credentials that have been created specifically for your application, with least possible privilege assigned.
You should never store IAM User credentials in an application -- there have been many cases of people accidentally committing such files to GitHub, where bad actors grab the credentials and gain access to the account.
You could store the credentials in a configuration file (eg via aws configure) and keep that file outside your codebase. However, there are still risks associated with storing the credentials in a file.
A safer option is to provide the credentials via environment variables, since they can be defined through a login profile and will never be included in the application code.
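For instance (the values shown are placeholders), set them in a login profile such as ~/.bash_profile rather than in code:

export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEY
export AWS_SECRET_ACCESS_KEY=example-secret-key
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls   # the CLI and SDKs pick these variables up automatically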
I don't think you can use service roles on your personal machine.
You can, however, use multi-factor authentication with the AWS CLI.
You can use credentials on any machine, not just EC2.
Follow the steps described in the documentation for your OS:
http://docs.aws.amazon.com/cli/latest/userguide/installing.html
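In short, a typical session looks like this (the values shown are placeholders); the keys end up in ~/.aws/credentials:

$ aws configure
AWS Access Key ID [None]: AKIAEXAMPLEKEY
AWS Secret Access Key [None]: example-secret-key
Default region name [None]: us-east-1
Default output format [None]: json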
I'm just getting started with EC2 and came across the IAM Role concept. This question is to clear up my doubt about how restrictive the concept is.
Let's say I have an EC2 instance with an attached IAM role, Role A, which possesses one policy, AmazonS3ReadOnlyAccess. Correct me if I'm wrong, but that means this particular instance is only allowed to perform S3 read-only operations.
Now say I create a user with programmatic access and the AmazonS3FullAccess policy.
If this user SSHes into the EC2 instance, can he write files to S3?
I'm still unable to try it out myself, as I don't have a Linux machine and am still figuring out how to connect to EC2 using PuTTY.
Let's say I have an EC2 instance with an attached IAM role, Role A, which possesses one policy, AmazonS3ReadOnlyAccess. Correct me if I'm wrong, but that means this particular instance is only allowed to perform S3 read-only operations.
Yes
Now say I create a user with programmatic access and the AmazonS3FullAccess policy. If this user SSHes into the EC2 instance, can he write files to S3?
IAM users cannot SSH to EC2 instances using IAM user credentials. After provisioning an EC2 instance, you use regular operating-system user accounts to SSH to the server (with the default key pair created by AWS).
In addition, if a user SSHes into the EC2 instance and uses the programmatic access credentials of an IAM user through the AWS CLI, REST API, or SDKs (it doesn't have to be an EC2 instance; it can also be your on-premise server), then, if that IAM user has an S3 write policy, the CLI commands, API calls, or SDK code are able to write files to S3 (see the sketch below).
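A sketch of that scenario from inside the instance (the bucket and profile names are placeholders):

# The instance role alone is read-only, so this fails:
aws s3 cp ./report.txt s3://example-bucket/report.txt

# With the IAM user's full-access keys configured as a separate profile, it works:
aws configure --profile s3-full-user
aws s3 cp ./report.txt s3://example-bucket/report.txt --profile s3-full-user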
So, in summary:
Use IAM roles if you are running on an EC2 instance, for your CLI commands, SDK code, or REST API calls that access AWS resources.
If you are using a server on-premise or outside AWS, use an IAM user's programmatic access keys to do the same.
Insight into how IAM roles work internally with EC2:
When you attach an IAM role to an EC2 instance, AWS periodically delivers temporary access credentials to that instance (which is a good security practice).
These credentials are accessible through the metadata URLs for the CLI, REST API, and code using SDKs inside the EC2 instance.
Note: Using roles is more secure, since they rely on temporary access credentials, whereas IAM user programmatic access uses long-lived access credentials.
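You can see this from inside an instance with a role attached; a sketch (on newer instances IMDSv2 may additionally require a session token):

# Discover the role name, then fetch its temporary credentials
ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE
# Returns JSON containing AccessKeyId, SecretAccessKey, Token and an Expiration time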
I am trying to set up Spinnaker locally to manage AWS EC2 instances. The current documentation describes steps that assume the Spinnaker instance is running on EC2: you create a role and attach it to the Spinnaker instance. As I am running Spinnaker in my local environment, I am looking for a way to allow my local Spinnaker instance to access the AWS resources. Would it be possible to have such a policy/role? Maybe using AWS STS (Security Token Service), but I don't know how to use those credentials with the Spinnaker instance.
You can do this directly by creating an IAM user with the required policies to access AWS resources, and then use that user's programmatic access credentials on your local machine with the AWS CLI, API, or SDKs.
For an existing IAM user, the steps are as follows:
IAM User -> Security Credentials -> Create Access Keys
Note: If you cannot trust your local environment, you can use the AWS STS service instead (for this you need to implement a separate service, where you pass user credentials and request a temporary token from AWS STS).
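A minimal sketch of that STS request, assuming the IAM user's keys are already configured locally:

aws sts get-session-token --duration-seconds 3600
# Returns a temporary AccessKeyId, SecretAccessKey and SessionToken that can be
# handed to the local Spinnaker instance instead of long-lived keys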
You can create an IAM role for your local machine to assume, like this example, or a stricter one; Spinnaker will handle the STS assume-role given it is configured properly.
As for the temporary credentials, if what you mean is MFA compatibility, I am still figuring out how to do that myself. I think one workaround is to create a wrapper script that calls sts:AssumeRole, asks the user for the MFA token, and then sets AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN, which will be honored by clouddriver; but then deployment to multiple AWS accounts will be a problem.
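A rough sketch of such a wrapper, with placeholder role and MFA device ARNs (note it exports the standard AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY variable names):

#!/bin/bash
# Ask for the current MFA token, assume the role, and export the
# resulting temporary credentials for clouddriver to pick up.
read -p "MFA token: " TOKEN
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/SpinnakerManaged \
  --role-session-name spinnaker-local \
  --serial-number arn:aws:iam::123456789012:mfa/my-user \
  --token-code "$TOKEN" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)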
I am in the early stages of writing an AWS app for our users that will run our research algorithms using their AWS resources. For example, our code will need to spin up EC2 instances running our 'worker' app, access RDS databases, and create and access SQS queues. The AWS Java SDK examples (we are writing this in Java) use an AwsCredentials.properties file to store the Access Key ID and Secret Access Key, which is fine for examples, but obviously not acceptable for our users, who would in essence be giving us access to all their resources. What is a clean way to go about running our system on their behalf? I discovered AWS Identity and Access Management (IAM), which seems to be for this purpose (I haven't got my head around it yet), esp. cross-account access between AWS accounts. This post makes it sound straightforward:
Use the Amazon IAM service to create a set of keys that only has permission to perform the tasks that you require for your script.
http://aws.amazon.com/iam/
However, other posts (e.g., Within IAM, can I restrict a group of users to access/launch/terminate only certain EC2 AMIs or instances?) suggest there are limitations to using IAM with EC2 in particular.
Any advice would be really helpful!
The key limitation with regard to RDS and EC2 is that, while you can restrict access to certain API actions, there are no resource-level constraints. For example, with an IAM S3 policy you can restrict a user to only being able to perform certain actions on certain buckets (see the sketch below). You can write a policy for EC2 that says a user is allowed to stop instances, but not one that says they can only stop certain instances.
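A sketch of the S3 side of that contrast (the user, policy, and bucket names are placeholders); an equivalent per-instance restriction was not available for EC2:

# Allow reads and writes only on one specific bucket
aws iam put-user-policy \
  --user-name script-user \
  --policy-name OneBucketOnly \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }]
  }'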
Another option is for them to provide you with temporary credentials via the Security Token Service. A variant on that is to use the new IAM roles service: with this, an instance has a set of policies associated with it. You don't need to provide an AwsCredentials.properties file, because the SDK can fetch credentials from the metadata service.
Finally, one last option might be consolidated billing. If the reason you are using their AWS resources is just the billing, then set up a new account that is billed from their account. The accounts are isolated from each other, so you can't, for example, delete their instances by accident. Equally, you can't access their RDS snapshots and the like (access to an RDS instance via MySQL, as opposed to the AWS API, would depend on the instance's security group). You can of course combine this with the previous options: they could provide you with credentials that only allow you to perform certain actions within that isolated account.