Attach IAM role to multiple EC2 instances

There seems to be plenty of documentation that outlines creating a role with its corresponding policies and then attaching it to a new or pre-existing (single) EC2 instance. However, when you have many instances and the task is to attach a role to all of them, I can't find a way that avoids doing the process one by one.
So, how does one attach an IAM role to multiple already-launched EC2 instances efficiently?

You'd have to do this one by one. The role would generally be attached at launch, but you can do it afterwards.
Programmatically looping through the instances would probably be the most efficient approach.

There is no way to bulk-assign roles to EC2 instances.
You can do this programmatically using the CLI or the SDK in your language of choice.
If using the CLI, you'll want the ec2 associate-iam-instance-profile command. Note that this command accepts only a single instance identifier at a time, so you'll need to iterate through a list of instances and invoke it repeatedly.
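For example, a minimal boto3 (Python) sketch of that loop might look like this; the profile name and instance IDs are placeholders you'd replace with your own:
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Placeholder instance IDs; in practice you might build this list from a
# describe-instances call or a tag filter.
instance_ids = ['i-0123456789abcdef0', 'i-0fedcba9876543210']

for instance_id in instance_ids:
    # The API associates one instance profile with one instance per call,
    # so we simply loop over the fleet.
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={'Name': 'MyInstanceProfile'},
        InstanceId=instance_id)
(A role is attached to an instance via an instance profile, which is what this call associates.)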

Related

How to generate dynamic tags for every instance launched using EC2 Launch Templates in AWS

I want to tag my EC2 instances with a unique name every time I launch one using a Launch Template. However, I cannot find a way to do so.
I see solutions where we can tag resources using Lambdas, but as far as I can understand, that doesn't really work for instances launched by Launch Templates.
Is there a way to achieve this? Please help.
There are generally two solutions for auto-tagging instances:
1. Enable a CloudTrail trail and detect the run-instances API call. The call is automatically picked up by a CloudWatch Events rule, which triggers a Lambda function; details are in Automatically tag new AWS resources based on identity or role, and a sketch of such a Lambda follows below.
2. Set up user data in your Launch Template so that the instance tags itself (second sketch below). This requires an instance role with permission to create tags, so it's up to your use case whether you want all your instances to have such a permission.
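For option 1, the Lambda handler might look roughly like this boto3 (Python) sketch; the tag key is an assumption, and the event shape follows the CloudTrail run-instances event as delivered through CloudWatch Events:
import boto3

def handler(event, context):
    detail = event['detail']
    # Collect the IDs of every instance created by this RunInstances call.
    instance_ids = [item['instanceId']
                    for item in detail['responseElements']['instancesSet']['items']]
    # Tag them with the identity that launched them.
    boto3.client('ec2').create_tags(
        Resources=instance_ids,
        Tags=[{'Key': 'CreatedBy', 'Value': detail['userIdentity']['arn']}])

For option 2, a minimal self-tagging script run from user data might look like this; it assumes boto3 is available on the AMI, the instance role allows ec2:CreateTags, the region is known, and the naming scheme is purely illustrative:
#!/usr/bin/python3
import urllib.request
import boto3

# Ask the instance metadata service (IMDSv2) for this instance's ID.
token_request = urllib.request.Request(
    'http://169.254.169.254/latest/api/token', method='PUT',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '60'})
token = urllib.request.urlopen(token_request).read().decode()
id_request = urllib.request.Request(
    'http://169.254.169.254/latest/meta-data/instance-id',
    headers={'X-aws-ec2-metadata-token': token})
instance_id = urllib.request.urlopen(id_request).read().decode()

# Tag this instance with a unique Name derived from its ID.
boto3.client('ec2', region_name='us-east-1').create_tags(
    Resources=[instance_id],
    Tags=[{'Key': 'Name', 'Value': 'web-' + instance_id}])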

kubernetes: kops and IAMFullAccess policy

According to the documentation of both kops and AWS, the dedicated kops user needs the IAMFullAccess permission to operate properly.
Why is this permission needed?
Is there a way to avoid (i.e. restrict) this, given that it is a bit too intrusive to create a user with such a permission?
Edit: one could assume that the specific permission is needed to attach the respective roles to the master(s) and node(s) instances;
therefore perhaps the question / challenge becomes how to:
not use IAMFullAccess;
sync with the node creation / bootstrapping process and attach the above roles (perhaps create a cluster on pre-configured instances? no idea if kops provides for that).
As far as I understand the design of kops, it's meant to be an end-to-end tool for provisioning k8s clusters. If you want to provision your nodes separately and deploy k8s on them, I would suggest using another tool, such as kubespray or kubeadm:
https://github.com/kubernetes-incubator/kubespray
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

How to create an AWS policy which allows instances to launch only if they have tags

How can I create an AWS policy which restricts users from launching an instance unless they create tags when launching it?
This is not possible using an IAM policy alone. The reason is that all EC2 instances are launched without tags; tags are added to the instance after it has launched.
The AWS Management Console hides this from you, but it's a two-step process.
The best you can do is to stop and/or terminate your EC2 instances after the fact if they are missing the required tags, as sketched below.
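As a rough boto3 (Python) sketch of that after-the-fact cleanup (the required tag key, Department, is just an assumed example):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Walk all running instances and stop any that lack the required tag.
for reservation in ec2.describe_instances(
        Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]
)['Reservations']:
    for instance in reservation['Instances']:
        tag_keys = {tag['Key'] for tag in instance.get('Tags', [])}
        if 'Department' not in tag_keys:
            ec2.stop_instances(InstanceIds=[instance['InstanceId']])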
Thanks to recent AWS changes, you can launch an EC2 instance and apply tags, all in a single, atomic operation. You can therefore write IAM policies requiring tags at launch.
More details, and a sample IAM policy, can be found at the AWS blog post announcing the changes.
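As a hedged sketch (the policy name and required tag key are assumptions, not the blog's exact sample), such a policy can be expressed as a Python dict and created with boto3; it denies ec2:RunInstances on the instance resource whenever the launch request carries no Name tag:
import json
import boto3

policy_document = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Deny',
        'Action': 'ec2:RunInstances',
        'Resource': 'arn:aws:ec2:*:*:instance/*',
        # Deny the launch when no Name tag is present in the request.
        'Condition': {'Null': {'aws:RequestTag/Name': 'true'}},
    }],
}

boto3.client('iam').create_policy(
    PolicyName='RequireNameTagAtLaunch',
    PolicyDocument=json.dumps(policy_document))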

The best way to add post-configuration to an ECS instance

I wonder what the best way is to add a post-configuration step after instance creation when the instances are automatically created by an ECS Cluster.
It seems there is no way to add user data to an ECS instance?
Note: the instances are created automatically by the ECS Cluster itself.
EDIT:
When using ECS, you configure a Cluster. While configuring the cluster you select the instance type and other settings (SSH key, ...), but there is nowhere to supply user data to the instances that ECS will create. So the question is how to do some post-configuration on instances automatically created by ECS.
When using the management console, it's more of a wizard that creates everything needed for you, including the instances using the Amazon Linux ECS optimized AMI, and doesn't give you a whole lot of control beyond that.
To get more fine-grained control, you would have to use another method of creating your cluster, such as the AWS CLI or CloudFormation. These methods allow you (or require you, actually) to create each piece one at a time.
Example:
$ aws ecs create-cluster --cluster-name MyEcsCluster
The above command creates a cluster, and only a cluster. You would still have to create an ECS task definition and an ECS service (although you could still use the management console for those) and, here's the real answer to your question, the EC2 instances which you want to attach to the cluster, either individually or through an Auto Scaling group. You could create the instances from the Amazon Linux ECS optimized AMI and add user data at that time to further configure them. In this scenario you would probably also use the user data to create the /etc/ecs/ecs.config file so the instance attaches to the ECS cluster you've created, e.g. echo "ECS_CLUSTER=MyEcsCluster" > /etc/ecs/ecs.config.
The short answer is, it's more work to gain that sort of flexibility, but it is doable; a sketch follows below.
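For instance, here is a rough boto3 (Python) sketch of launching one such instance with user data; the AMI ID, instance type, and instance profile name are placeholders, and the user-data script both joins the cluster and leaves room for your own post-configuration steps:
import boto3

# Shell script passed as user data; it runs once at first boot.
user_data = """#!/bin/bash
echo "ECS_CLUSTER=MyEcsCluster" > /etc/ecs/ecs.config
# ... your post-configuration steps go here ...
"""

ec2 = boto3.client('ec2', region_name='us-east-1')
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',   # placeholder: use the ECS-optimized AMI ID
    InstanceType='t2.micro',
    MinCount=1,
    MaxCount=1,
    IamInstanceProfile={'Name': 'ecsInstanceRole'},
    UserData=user_data)                # boto3 base64-encodes this for you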
Edit: Thinking about it further, you could likely use the management console wizards to create everything once, then manually terminate the instances it created for the cluster (or, rather, delete the Auto Scaling group that creates them) and add your own. This would save you some work.

How to allow an instance to delete old snapshots of an attached volume?

I'm trying to create a CloudFormation template that would create an EC2 instance, mount a 2GB volume, and take periodic snapshots, while also deleting the ones that are, say, a week or more old.
While I could get and integrate the access and secret keys, it seems that a signing certificate is required to delete snapshots. I could not find a way to create a new certificate with CloudFormation, so it seems like I should create a new user and certificate manually and pass those in as template parameters? In this case, is it correct that the user would be able to delete all the snapshots, including the ones that are not from that instance?
Is there a way to restrict snapshot deletion to only the ones with a matching description? Or what's the proper way to handle deleting old snapshots?
My recommendation is to create an IAM role (not IAM user) with CloudFormation and assign this role to the instance (again using CloudFormation). The role should be allowed to delete snapshots as appropriate.
One of the easiest ways to delete snapshots using the IAM role on the instance is to use the boto Python AWS library. Boto automatically finds and uses the correct credentials if you run it on the instance with the assigned IAM role.
Here is a simple boto script I just used to delete snapshot snap-51930522 in us-east-1:
#!/usr/bin/python
# Credentials are picked up automatically from the instance's IAM role.
import boto.ec2
boto.ec2.connect_to_region('us-east-1').delete_snapshot('snap-51930522')
Alternatively, you might have an external server run the snapshot cleanup instead of running it on the instances themselves. In addition to simplifying credential management and cron job distribution, it also lets you clean up after stopped or terminated instances.
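If you'd rather use the newer boto3 library, a minimal sketch of the weekly cleanup might look like this; the description prefix is an assumption standing in for however you mark your own snapshots:
import datetime
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=7)

# Only consider snapshots we own whose description matches our backup job.
paginator = ec2.get_paginator('describe_snapshots')
for page in paginator.paginate(
        OwnerIds=['self'],
        Filters=[{'Name': 'description', 'Values': ['my-periodic-backup*']}]):
    for snapshot in page['Snapshots']:
        if snapshot['StartTime'] < cutoff:
            ec2.delete_snapshot(SnapshotId=snapshot['SnapshotId'])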