IMDS in ECS Task without Exposing Instance Role Credentials

In our ECS cluster running on EC2 instances, we would like our tasks to be able to access the metadata server (169.254.169.254) without exposing the instance role credentials that the metadata makes available to the task at:
http://169.254.169.254/latest/meta-data/iam/security-credentials/<INSTANCE_ROLE_NAME>
I am aware of IMDSv2, but I am not sure how it could solve our problem in this specific case. Another solution could be to simply disable IMDS within tasks, but we still need to reach the instance metadata from within them.
Would there be a workaround / solution to our problem that would allow us to benefit from the metadata without exposing the instance role credentials?
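For context on the IMDSv2 point: a process inside a task can generally still complete the IMDSv2 token handshake and read the instance role credentials, unless the endpoint is blocked for containers or the token response's hop limit stops it from crossing the container's extra network hop. A rough sketch of what such a process could do (the role name is a placeholder):

import urllib.request

IMDS = "http://169.254.169.254"

# IMDSv2 first requires a session token, obtained via a PUT request.
token_request = urllib.request.Request(
    IMDS + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_request).read().decode()

# With the token, the task can still read the instance role credentials,
# so IMDSv2 alone does not hide them (INSTANCE_ROLE_NAME is a placeholder).
creds_request = urllib.request.Request(
    IMDS + "/latest/meta-data/iam/security-credentials/INSTANCE_ROLE_NAME",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(creds_request).read().decode())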

Related

Retrieve existing resource data using AWS CloudFormation

I need to retrieve existing data/properties of a given resource by using an AWS CloudFormation template. Is it possible? If it is, how can I do it?
Example 1:
Output: Security Group ID which allows traffic on port 22
Example 2:
Output: Instance ID which uses the default VPC
AWS CloudFormation is used to deploy infrastructure from a template in a repeatable manner. It cannot provide information on any resources created by any methods outside of CloudFormation.
Your requirements seem more relevant to AWS Config:
AWS Config provides a detailed view of the configuration of AWS resources in your AWS account. This includes how the resources are related to one another and how they were configured in the past so that you can see how the configurations and relationships change over time.
An AWS resource is an entity you can work with in AWS, such as an Amazon Elastic Compute Cloud (EC2) instance, an Amazon Elastic Block Store (EBS) volume, a security group, or an Amazon Virtual Private Cloud (VPC).
Using your examples, AWS Config can list EC2 instances and any resources that are connected to the instances, such as Security Groups and VPCs. You can easily click through these relationships and view the configurations. It is also possible to view how these configurations have changed over time, such as:
When an EC2 instance changed state (e.g. stopped, running)
When rules changed on Security Groups
Alternatively, you can simply make API calls to AWS services to obtain the current configuration of resources, such as calling DescribeInstances to obtain a list of Amazon EC2 instances and their configurations.
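As a rough illustration of that last approach, here is a boto3 sketch (not from the original answer) that lists each instance with its VPC and attached security groups, which covers both example outputs above:

import boto3

ec2 = boto3.client("ec2")

# Walk all reservations/instances and print the properties relevant to the
# two examples: which VPC each instance is in and which security groups it uses.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(
                instance["InstanceId"],
                instance.get("VpcId"),
                [sg["GroupId"] for sg in instance["SecurityGroups"]],
            )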

Update terraform resource after provisioning

So I recently asked a question about how to provision instances that depend on each other. The answer I got was that I could instantiate the 3 instances, then have a null_resource with a remote-exec provisioner that would update each instance.
It works great, except that in order for this to work my instances need to be configured to allow SSH. And since they are in a private subnet, I first need to allow SSH on a public instance that will then bootstrap my 3 instances. This bootstrap operation requires allowing SSH on 4 instances that really don't need it once the bootstrap is complete. This is not that bad, as I can still restrict the traffic to a known IP/subnet, but I still thought it was worth asking whether there is a way to avoid this problem.
Can I update the security group of running instances in a single Terraform plan? Example: Instantiate 3 instances with security_group X, provision them through SSH, then update the instances with security_group Y, thus disallowing SSH. If so, how? If not, are there any other solutions to this problem?
Thanks.
Based on the comments.
Instead of ssh, you could use AWS Systems Manager Run Command:
AWS Systems Manager Run Command lets you remotely and securely manage the configuration of your managed instances. Run Command enables you to automate common administrative tasks and perform ad hoc configuration changes at scale.
This would require making your instances recognized by AWS Systems Manager (SSM), which boils down to three things (a quick verification sketch follows the list):
Network connectivity to the SSM service. Since your instances are in a private subnet, they have to reach SSM either through a NAT gateway or through VPC interface endpoints for SSM.
The SSM Agent installed and running. This is usually not an issue, as most official AMIs on AWS already have it set up.
An instance role with the AmazonSSMManagedInstanceCore AWS managed policy attached.
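As a quick way to confirm the instances actually registered with SSM (my own addition, not part of the original answer), something like this can be run once the three items above are in place:

import boto3

ssm = boto3.client("ssm")

# Instances that have checked in with Systems Manager appear here; anything
# missing usually points at the network path, the agent, or the instance role.
for info in ssm.describe_instance_information()["InstanceInformationList"]:
    print(info["InstanceId"], info["PingStatus"], info.get("PlatformName"))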
Since Run Command is not natively supported by Terraform, you either have to use a local-exec provisioner to run the command through the AWS CLI, or invoke it through a Lambda function using aws_lambda_invocation.
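If you go the Lambda route (or wrap a small script behind local-exec), the underlying API is SendCommand. A minimal sketch, with the instance ID and commands as placeholders:

import time

import boto3

ssm = boto3.client("ssm")

# Run an ad hoc shell script on a private instance instead of bootstrapping it over SSH.
response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # placeholder instance ID
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["echo bootstrap step goes here"]},
)
command_id = response["Command"]["CommandId"]

# In practice you would poll until the invocation leaves Pending/InProgress.
time.sleep(5)
result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(result["Status"], result.get("StandardOutputContent", ""))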

Unable to SSH into AWS EC2 instance with instance metadata turned off

I am not able to SSH into an EC2 instance if it is launched with the instance metadata service turned off.
ec2.runInstances({
  // ...
  MetadataOptions: {
    HttpEndpoint: 'disabled'
  }
  // ...
})
This however is not an issue if I launch with the MetadataOptions enabled and disable it with a modify-instance-metadata-options call after the instance has finished starting up. Is this documented behaviour? I couldn't find it explicitly mentioned in the documentation anywhere.
Note - this is not a security group, Network ACL, etc issue.
I noticed this too. It seems that disabling IMDS breaks all of the following:
SSH access is broken; the authorized_keys file for the default user (i.e. root or ubuntu) is not populated, because the EC2 key pair is normally provided via instance metadata.
Cloud-init/cloud-config (aka "user data") does not run. The user data is normally fetched from the instance metadata service (cloud-init tries http://169.254.169.254 and the legacy http://instance-data.:8773 alias), which is unavailable when IMDS is disabled.
Therefore, if your desire is to disable IMDS from the moment of launch, it seems the only viable workaround is to create your own AMI that has your own configuration (i.e. SSH authorized_keys) baked into it. Packer is commonly used for building AMIs in this way.
An alternate approach would be to give the EC2 instance profile permission to call ModifyInstanceMetadataOptions, conditionally scoped so the instance can only affect itself, then call aws ec2 modify-instance-metadata-options --http-endpoint disabled at the end of the setup script. Once an instance locks itself out this way, it cannot unlock itself, because it will no longer have access to STS credentials via the IMDS endpoint.
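A sketch of that self-lockout step, assuming the instance profile has been granted ec2:ModifyInstanceMetadataOptions scoped to the instance's own ARN (the scoping itself lives in IAM, not in this code):

import urllib.request

import boto3

IMDS = "http://169.254.169.254"

# Look up our own instance ID and region while IMDS is still reachable (IMDSv2 handshake).
token_request = urllib.request.Request(
    IMDS + "/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},
)
token = urllib.request.urlopen(token_request).read().decode()
imds_headers = {"X-aws-ec2-metadata-token": token}
instance_id = urllib.request.urlopen(
    urllib.request.Request(IMDS + "/latest/meta-data/instance-id", headers=imds_headers)
).read().decode()
region = urllib.request.urlopen(
    urllib.request.Request(IMDS + "/latest/meta-data/placement/region", headers=imds_headers)
).read().decode()

# Final step of the setup script: turn the metadata endpoint off for this
# instance. boto3 can still fetch role credentials over IMDS for this call;
# once it succeeds, the instance can no longer refresh credentials, so it
# cannot undo the change itself.
ec2 = boto3.client("ec2", region_name=region)
ec2.modify_instance_metadata_options(
    InstanceId=instance_id,
    HttpEndpoint="disabled",
)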

How to create an AWS policy which allows instances to launch only if they have tags created?

How can I create an AWS policy that prevents users from creating an instance unless they add tags when they launch it?
This is not possible using an IAM policy alone, because all EC2 instances are launched without EC2 tags; tags are added to the EC2 instance after it has launched.
The AWS Management Console hides this from you, but it's a two-step process.
The best you can do is to stop and/or terminate your EC2 instances after-the-fact if they are missing the tags.
Thanks to recent AWS changes, you can launch an EC2 instance and apply tags, all in a single, atomic operation. You can therefore write IAM policies requiring tags at launch.
More details, and a sample IAM policy, can be found at the AWS blog post announcing the changes.
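The launch-and-tag call looks roughly like this in boto3 (AMI ID, instance type, and tag values are placeholders); an IAM policy can then require the tags supplied in the request and deny the launch if they are missing:

import boto3

ec2 = boto3.client("ec2")

# Tags are applied as part of the RunInstances call itself, so a policy that
# conditions on request tags can allow or deny the launch as one operation.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {
            "ResourceType": "instance",
            "Tags": [{"Key": "CostCenter", "Value": "1234"}],  # example required tag
        },
        {
            "ResourceType": "volume",
            "Tags": [{"Key": "CostCenter", "Value": "1234"}],
        },
    ],
)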

Applying IAM roles to ECS instances

Is there a way to run ECS containers under certain IAM roles?
Basically, if you have code / a server that depends on IAM roles to access AWS resources (like S3 buckets or Dynamo tables), what will happen when you run that code / server as an ECS container? Can you control the roles per container?
Update 2: Roles are now supported on the task level
Update: Lyft has an open-source tool called 'metadataproxy' which claims to solve this problem, but it has been met with some security concerns.
When you launch a container host (the instance that connects to your cluster), this is called the container instance.
This instance will have an IAM role attached to it (in the guides I think it is called ecsInstanceProfile).
This instance runs the ECS agent (and subsequently Docker). The way this works is that when tasks are run, the actual containers make calls to/from AWS services, etc. That traffic is swallowed up by the host (agent), since it actually controls the network in/out of the Docker containers, so in actuality the traffic comes from the agent.
So no, you cannot control the IAM role on a per-container basis; you would need to do that via the instances (agents) that join the cluster.
I.e.:
You join i-aaaaaaa, which has the ECS IAM policy + S3 read-only, to the cluster.
You join i-bbbbbbb, which has the ECS IAM policy + S3 read/write, to the cluster.
You launch a task 'c' that needs r/w to S3. You'd want to make sure it runs on i-bbbbbbb.
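Following up on the "Update 2" note above: with task-level roles, the role is attached to the task definition rather than to the container instance, so containers pick up their own credentials instead of the instance role's. A minimal boto3 sketch (family, role ARN, and image are placeholders):

import boto3

ecs = boto3.client("ecs")

# The task role is what containers in this task assume via the ECS credentials
# endpoint; the container instance keeps its own, separate instance role.
ecs.register_task_definition(
    family="example-task",  # placeholder
    taskRoleArn="arn:aws:iam::123456789012:role/exampleTaskRole",  # placeholder
    containerDefinitions=[
        {
            "name": "app",
            "image": "example/app:latest",  # placeholder image
            "memory": 512,
            "essential": True,
        }
    ],
)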