AWS IAM policy to update specific ECS cluster through AWS console

We're running a staging environment in a separate ECS Fargate cluster. I'm trying to allow external developers to update tasks and services in this cluster through the AWS Console.
I've created a policy that looks OK to me based on the documentation. Updates through the AWS CLI work.
However, the AWS Console requires a lot of other, only loosely related permissions. Is there a way to find out which permissions are required? I'm looking at CloudTrail logs, but it takes 20 minutes until something shows up. I'd also like to avoid granting unrelated permissions, even if they are read-only.
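For reference, a minimal sketch of the kind of policy described, scoped to one cluster via the ecs:cluster condition key (the policy name, account ID, region, and cluster name here are placeholders, and the action list is illustrative, not exhaustive):
# Placeholders: policy name, account ID, region, cluster name.
aws iam create-policy \
  --policy-name StagingEcsDeploy \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "ecs:UpdateService",
        "ecs:DescribeServices",
        "ecs:ListTasks",
        "ecs:DescribeTasks"
      ],
      "Resource": "*",
      "Condition": {
        "ArnEquals": {
          "ecs:cluster": "arn:aws:ecs:eu-west-1:123456789012:cluster/staging"
        }
      }
    }]
  }'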

Related

AWS - Conditionally run a script on EC2 instances

I am looking for a way to conditionally run a script on every existing / new EC2 instance.
For example, in Azure you can create an Azure Policy that is evaluated against every existing / new VM, and when a set of conditions applies to that VM, you can deploy a VM extension or run a DSC script.
I am looking for the equivalent service in AWS.
From AWS Systems Manager Run Command - AWS Systems Manager:
Using Run Command, a capability of AWS Systems Manager, you can remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. You can use Run Command from the AWS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs.
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is removed from an Auto Scaling group, and join instances to a Windows domain.
You will need to trigger the Run Command to execute on nominated EC2 instances. It will not automatically run for every 'new' instance.
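As an illustration, a sketch of triggering Run Command against instances selected by tag (the tag key/value and script path are placeholders):
# Placeholders: tag key/value and the script to run.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Environment,Values=staging" \
  --parameters 'commands=["/usr/local/bin/bootstrap.sh"]'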
Alternatively, there is Evaluating Resources with AWS Config Rules - AWS Config:
Use AWS Config to evaluate the configuration settings of your AWS resources. You do this by creating AWS Config rules, which represent your ideal configuration settings. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant.
For example, when an EC2 volume is created, AWS Config can evaluate the volume against a rule that requires volumes to be encrypted. If the volume is not encrypted, AWS Config flags the volume and the rule as noncompliant. AWS Config can also check all of your resources for account-wide requirements. For example, AWS Config can check whether the number of EC2 volumes in an account stays within a desired total, or whether an account uses AWS CloudTrail for logging.
You can create an AWS Config custom rule that triggers a process when a non-compliant resource is found. This way, an automated action could correct the situation.
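For instance, a sketch of enabling the AWS-managed ENCRYPTED_VOLUMES rule from the volume-encryption example above (the rule name is a placeholder; a custom rule would instead reference your own Lambda function as the source):
# Placeholder: the ConfigRuleName; ENCRYPTED_VOLUMES is an AWS-managed rule.
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "ebs-volumes-encrypted",
  "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"},
  "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Volume"]}
}'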
You can also use an AWS managed service such as OpsWorks (managed Chef/Puppet).
This gives you an organized way of running commands, by letting you define sets of instances and associated resources.

Installing AWS CLI on EC2 instances via Spinnaker/Terraform

Are there any security considerations around installing the AWS CLI as part of baking an AMI?
I can see the following ways in which the AWS CLI can be installed:
1. Via a baked image (i.e. making the AWS CLI part of the base AMI itself)
2. Via cloud-init
3. Installing it as a prerequisite just before your service bootstraps
I see a strong NO (from our internal community) on the above, for the reason that a Spinnaker-managed AWS instance can do more than just access cloud-native resources and is very powerful. So in this case, if we tighten the IAM role Spinnaker uses when it deploys instances, should it be fine?
It really depends on what you are going to do with the AWS CLI in each baked EC2 instance. Is it for debugging purposes? Is it part of the functionality of your system?
If it is for debugging only, you can install the AWS CLI but leave an IAM role with minimum permissions attached. You could keep a separate role with the needed special permissions and attach it to your desired instances only while you access the instance and perform your debugging actions.
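A sketch of that pattern, swapping a minimal debugging profile onto a running instance (the instance ID and profile name are placeholders):
# Placeholders: instance ID and the name of a pre-created minimal profile.
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=debug-minimal-profile
If the instance already has a profile attached, you would first disassociate it or use aws ec2 replace-iam-instance-profile-association instead.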
Other than that, it is really not recommended to use the AWS CLI or package managers inside instances.

Difference between AWS Elastic Container Service's (ECS) ExecutionRole and TaskRole

I'm using AWS's CloudFormation, and I recently spent quite a bit of time trying to figure out why the role I had created and attached policies to was not enabling my ECS task to send a message to a Simple Queue Service (SQS) queue.
I realized that I was incorrectly attaching the SQS permissions policy to the Execution Role when I should have been attaching the policy to the Task Role. I cannot find good documentation that explains the difference between the two roles. The CloudFormation documentation for the two of them is here: ExecutionRole and TaskRole
Referring to the documentation, you can see that the execution role is the IAM role that executes ECS actions, such as pulling the image and storing the application logs in CloudWatch.
The TaskRole, then, is the IAM role used by the task itself. For example, if your container wants to call other AWS services like S3, SQS, etc., then those permissions would need to be covered by the TaskRole.
Using a TaskRole provides the same functionality as putting access keys in a config file on the container instance, but without the keys; using access keys in this way is not secure and is considered very bad practice. I include this comparison in the answer because many people reading this already understand access keys.
The ECS task execution role covers capabilities of the ECS agent (and container instance), e.g.:
Pulling a container image from Amazon ECR
Using the awslogs log driver
The ECS task role covers specific capabilities within the task itself, e.g.:
When your actual code runs and calls other AWS services
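Tying this back to the SQS example from the question, a sketch of a task definition that keeps the two roles separate (the family, role names, account ID, and image are placeholders):
# Placeholders: family, both role ARNs, and the container image.
aws ecs register-task-definition \
  --family my-app \
  --execution-role-arn arn:aws:iam::123456789012:role/ecsTaskExecutionRole \
  --task-role-arn arn:aws:iam::123456789012:role/my-app-sqs-sender \
  --container-definitions '[{
    "name": "app",
    "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
    "memory": 512
  }]'
The SQS send permissions would be attached to the role passed as --task-role-arn; the execution role only needs the standard image-pull and log-shipping permissions.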

Can I use AWS LightSail with AWS CloudWatch?

I've recently started testing out Lightsail and would like to keep my logging centralized in CloudWatch, but I cannot seem to find anything that would enable this. Interestingly, Lightsail instances do not appear in the EC2 Dashboard. I thought they were just EC2 instances beneath the surface.
I thought they were just EC2 instances beneath the surface.
Yes... but.
Conceptually speaking, you are the customer of Lightsail, and Lightsail is the customer of EC2.
It's as though there were an intermediary between you and AWS. The Lightsail resources are in EC2, but they're not in your EC2. They appear to be owned by an AWS account other than your AWS account, so you can't see them directly.
Parallels for this:
RDS is a "customer" of EC2/EBS. RDS instances are EC2 machines with EBS volumes. Where are they in the console? They aren't there. The underlying resources aren't owned by your account.
In EC2, EBS snapshots are stored in S3. Which bucket? Not one that you can see. EBS is a "customer" of S3. It has its own buckets.
S3 objects can be migrated to the Glacier storage class. Which Glacier vault? Again, not one that you can see. S3 is a "customer" of Glacier. It has its own vaults.
Every API Gateway endpoint is automatically front-ended by CloudFront. Which distribution? You get the idea... API Gateway is a "customer" of CloudFront.
I am not implying in any way that Lightsail is actually a separate entity from AWS in any meaningful sense... I don't know how it's actually organized... but operationally, that is how it works. You can't see these resources.
It's possible to get it working. The problem is that Lightsail instances are EC2 instances under the hood, but without access to all of the EC2 configuration. The CloudWatch agent documentation explains how to set up IAM roles for EC2 instances to assume, but Lightsail boxes use a single fixed role which can't be changed or edited. As a result, you need to follow the instructions for setting the agent up as an on-premises server.
The problem you will then hit is as David J Eddy saw in his answer:
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
This is due to a bug in the CloudWatch agent that ignores the argument to use on-premise mode (-m onPremise) if it detects it is running on an EC2 instance. The trick is to edit the common-config.toml file to force the use of a local AWS CLI profile for authentication. You will need to add the following lines to that file (found at /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml on Debian; the installation location is OS-dependent):
[credentials]
shared_credential_profile = "AmazonCloudWatchAgent"
Restart the agent and it should start reporting metrics. I've put together a full tutorial here.
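For completeness, a sketch of the accompanying steps, assuming the profile name from common-config.toml above and a systemd-based distribution; the credentials entered belong to an IAM user allowed to call cloudwatch:PutMetricData:
# Run as root so the profile lands in the credentials file the agent reads.
sudo aws configure --profile AmazonCloudWatchAgent
sudo systemctl restart amazon-cloudwatch-agent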
Running the CloudWatch Agent on Lightsail does NOT work at this time. When the agent attempts to communicate with CloudWatch it receives a 403 from the STS service. Selecting EC2 or OnPremise options during configuration wizards yields the same results.
2018-10-20T16:04:37Z E! WriteToCloudWatch failure, err: AccessDenied: User: arn:aws:sts::891535117650:assumed-role/AmazonLightsailInstanceRole/i-0788a602f758b836f is not authorized to perform: cloudwatch:PutMetricData status code: 403, request id: b443ecc6-d481-11e8-a551-6d030b8667be
Just to make sure, I installed the CloudWatch Agent on my Ubuntu 18.04 desktop and started the agent without error.
Plus, if it did work, why would people pay for EC2 at a higher price point? CloudWatch is a free value-added service for using the full services.

Applying IAM roles to ECS instances

Is there a way to run ECS containers under certain IAM roles?
Basically, if you have code / a server that depends on IAM roles to access AWS resources (like S3 buckets or DynamoDB tables), what will happen when you run that code / server as an ECS container? Can you control the roles per container?
Update 2: Roles are now supported at the task level.
Update: Lyft has an open-source tool called 'metadataproxy' which claims to solve this problem, but it has been met with some security concerns.
When you launch a container host (the instance that connects to your cluster), this is called the container instance.
This instance will have an IAM role attached to it (in the guides it is ecsInstanceProfile, I think).
This instance runs the ECS agent (and subsequently Docker). The way this works is that when tasks are run, the actual containers make calls to/from AWS services, etc. This is swallowed up by the host (agent), since it actually controls the network in/out of the Docker containers. This traffic in actuality now comes from the agent.
So no, you cannot control the IAM role on a per-container basis; you would need to do that via the instances (agents) that join the cluster.
I.e.:
You join i-aaaaaaa, which has the ECS IAM policy + S3 read-only, to the cluster.
You join i-bbbbbbb, which has the ECS IAM policy + S3 read/write, to the cluster.
You launch a task 'c' that needs r/w to S3. You'd want to make sure it runs on i-bbbbbbb.
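Following up on the "Update 2" note above: with task-level roles, each task definition can carry its own role, so task 'c' no longer needs to be pinned to a particular instance. A sketch of creating such a role (the role name is a placeholder; ECS task roles are assumed by the ecs-tasks.amazonaws.com service principal):
# Placeholder role name; the task definition would then reference this
# role via its task role setting.
aws iam create-role \
  --role-name task-c-s3-readwrite \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ecs-tasks.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'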