AWS Policies explained? - amazon-web-services

I am learning AWS and I have the following task in an online training course:
Configure the MongoDB VM as highly privileged – configure an instance
profile to the VM and add the permission “ec2:*” as a custom policy.
I am trying to work out what that means. Is the task asking for a role that enables the VM instance to have full control over all EC2 resources?
If I understand it correctly, then I think the following policy would implement it.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:ec2:*:*:instance"
    }
  ]
}
My understanding is that this policy is saying any EC2 instance can perform any EC2 action. Is that right?

I would say you are almost correct. Roles are attached to individual resources (here, your instance via its instance profile), which means your particular VM can perform any EC2 action, but only on resources matching arn:aws:ec2:*:*:instance.
There is a difference between saying "any EC2 instance can perform any EC2 action" and "the EC2 instance to which this role is attached can perform any EC2 action"; the latter is what an instance profile gives you.
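If the training task literally asks for the permission "ec2:*" as a custom policy with no resource restriction, a minimal sketch (the policy name and exact wording are up to you) would look like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*"
    }
  ]
}
Note that your version scopes the resource to instance ARNs only; EC2 actions that don't support instance-level resources (many Describe* calls, for example) would be denied under it, whereas "Resource": "*" allows everything in EC2.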

Related

Does anyone know where this goes in the instances?

{
  "Sid": "ElasticBeanstalkHealthAccess",
  "Action": [
    "elasticbeanstalk:PutInstanceStatistics"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:elasticbeanstalk:*:*:application/*",
    "arn:aws:elasticbeanstalk:*:*:environment/*"
  ]
}
That's part of the IAM instance profile for the Elastic Beanstalk instance.
If the AWSElasticBeanstalkWebTier or AWSElasticBeanstalkWorkerTier managed policy is attached to your instance profile's role, the ElasticBeanstalkHealthAccess permissions are already included.
See https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
There are two IAM roles associated with an Elastic Beanstalk environment:
Service role: used to manage the environment.
Instance role: the role assumed by the running application; it is used to provide access to other AWS services.
You need to find your instance role in the IAM console and attach the permission shown in the documentation. This will allow your application to send statistics.
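If you prefer the CLI, a sketch like the following would attach that statement as an inline policy. This assumes the default instance role name aws-elasticbeanstalk-ec2-role and that the snippet above is wrapped in a full policy document (with "Version" and "Statement") saved as health-access.json:
aws iam put-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-name ElasticBeanstalkHealthAccess \
  --policy-document file://health-access.json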

Permissions to READ EKS cluster from EC2

I have an EKS cluster and EC2. I would like to create an instance profile and attach it to the EC2 - this profile should allow ONLY READ access to the EKS cluster.
Will the following policy be apt for this requirement?:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "eks:ListNodegroups",
        "eks:DescribeFargateProfile",
        "eks:ListTagsForResource",
        "eks:ListAddons",
        "eks:DescribeAddon",
        "eks:ListFargateProfiles",
        "eks:DescribeNodegroup",
        "eks:ListUpdates",
        "eks:DescribeUpdate",
        "eks:AccessKubernetesApi",
        "eks:DescribeCluster",
        "eks:ListClusters",
        "eks:DescribeAddonVersions"
      ],
      "Resource": "*"
    }
  ]
}
It depends on what you mean by "read" access.
If you mean "read" from the AWS perspective, as in being able to use the AWS CLI to tell you about EKS, then yes, that would be sufficient to get you started. It will not cover any kubectl commands.
But if you mean read as in being able to execute kubectl commands against the cluster, then you will not be able to achieve that with this policy alone.
To implement read access to the cluster itself using kubectl commands, your EC2 instance will need a minimum of the following IAM permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "eks:DescribeCluster",
        "eks:ListClusters"
      ],
      "Resource": "*"
    }
  ]
}
With this, your EC2 instance will be able to execute eksctl utils write-kubeconfig --cluster=cluster-name to configure the kubeconfig (this also assumes you have the required components installed on the instance to run kubectl).
You also need to set up permissions within the cluster, because the IAM permissions alone don't grant any visibility inside the cluster itself.
The role you assign to your EC2 instance needs to be added to the aws-auth ConfigMap in the kube-system namespace. See Managing users or IAM roles for your cluster in the AWS docs for examples.
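For illustration, the mapRoles entry might look something like this (the role ARN, username, and group name are placeholders for your own values):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/my-ec2-eks-read-role
      username: ec2-read-only
      groups:
        - eks-read-only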
Unfortunately I don't believe there's a simple out-of-the-box RBAC role that gives you read-only access to the entire cluster. Kubernetes provides four user-facing roles (cluster-admin, admin, edit, and view), and of them only cluster-admin, bound to the system:masters group by default, has cluster-wide access.
Have a look at the Kubernetes Using RBAC Authorization documentation, specifically the section on user-facing roles.
You will need to design a permission strategy to fit your needs, but the default view role is a good starting point. It is a ClusterRole that was designed to be granted within a namespace via a RoleBinding, although you can bind it cluster-wide with a ClusterRoleBinding if that suits your needs.
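As a sketch, a cluster-wide read-only binding for the group used in the aws-auth example above might look like this (the binding and group names are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks-read-only
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: Group
    name: eks-read-only
    apiGroup: rbac.authorization.k8s.io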
Permissions and RBAC for Kubernetes is a very deep rabbit-hole.

connecting Docker to a cloud provider, Amazon AWS

Context: I was going through Link to Amazon Web Services to create Swarms, in order to connect to my provider.
The role was created successfully.
Then, while creating the policy to associate with the role, a problem occurred.
Problem:
An error occurred: Cannot exceed quota for PolicySize: 5120
As suggested by the documentation, this is what I need to add to the policy:
https://docs.docker.com/docker-for-aws/iam-permissions/
I did some research and people seem to like this solution:
https://github.com/docker/machine/issues/1655
Given that the Docker documentation is wrong (it doesn't work in my case), what's the best way to create this policy?
You are looking at the wrong instructions. To connect docker-cloud to AWS, follow these instructions instead: https://docs.docker.com/docker-cloud/infrastructure/link-aws/
It boils down to the following three steps:
Create an AWS policy for docker-cloud.
Create a docker-cloud role and attach the policy from step 1.
Attach the AWS role/account to docker-cloud.
The policy in step 1 is pretty simple. It should allow EC2 instance-related actions (your screenshot of the policy looks like it doesn't include EC2 permissions):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*",
        "iam:ListInstanceProfiles"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
The role must have this policy attached so that it actually carries these permissions.
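As a rough CLI sketch (the names and account ID are placeholders, and the policy above is assumed to be saved as dockercloud-policy.json), steps 1 and 2 could look like this:
aws iam create-policy \
  --policy-name dockercloud-policy \
  --policy-document file://dockercloud-policy.json
aws iam attach-role-policy \
  --role-name dockercloud-role \
  --policy-arn arn:aws:iam::123456789012:policy/dockercloud-policy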
For a detailed post on the deployment via docker-cloud see: https://blog.geografia.com.au/how-we-are-using-docker-cloud-for-automated-testing-and-deployments-of-applications-bb87ec3173e7

Error registering: NoCredentialProviders: no valid providers in chain ECS agent error

I'm trying to use EC2 Container Service, and I'm using Terraform to create it.
I have defined an ECS cluster, an autoscaling group, and a launch configuration. Everything seems to work except one thing: the EC2 instances are created, but they do not register in the cluster; the cluster just says no instances are available.
The ECS agent log on a created instance is flooded with one error:
Error registering: NoCredentialProviders: no valid providers in chain
The EC2 instances are created with a proper role, ecs_role. This role has two policies; one of them is the following, as the docs require:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecs:CreateCluster",
        "ecs:DeregisterContainerInstance",
        "ecs:DiscoverPollEndpoint",
        "ecs:Poll",
        "ecs:RegisterContainerInstance",
        "ecs:StartTelemetrySession",
        "ecs:Submit*",
        "ecs:StartTask"
      ],
      "Resource": "*"
    }
  ]
}
I'm using AMI ami-6ff4bd05 and the latest Terraform.
It was a problem with the trust relationship in the role: the trust policy needs to include the ec2.amazonaws.com service so that EC2 instances can assume the role. Unfortunately the error message was not all that helpful.
Example of trust relationship:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": ["ecs.amazonaws.com", "ec2.amazonaws.com"]
      },
      "Effect": "Allow"
    }
  ]
}
Make sure you select the correct ECS role in the launch configuration.
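For reference, a minimal Terraform sketch of wiring the role into the launch configuration might look like this (the resource names, instance type, and cluster name are placeholders):
resource "aws_iam_instance_profile" "ecs" {
  name = "ecs-instance-profile"
  role = aws_iam_role.ecs_role.name
}

resource "aws_launch_configuration" "ecs" {
  name_prefix          = "ecs-"
  image_id             = "ami-6ff4bd05"
  instance_type        = "t2.micro"
  iam_instance_profile = aws_iam_instance_profile.ecs.name

  # The agent only joins the intended cluster if ECS_CLUSTER is set in user data.
  user_data = "#!/bin/bash\necho ECS_CLUSTER=my-cluster >> /etc/ecs/ecs.config"
}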
You might want to add AmazonEC2RoleforSSM (or AmazonSSMFullAccess) to your EC2's role.
Apparently this error message also occurs when an invalid AWS profile is passed.
I spent two days trying out everything without any luck. I have a standard setup, i.e. the ECS cluster instance in a private subnet, the ELB in a public subnet, NAT and IGW properly set up, the respective security groups, IAM role, and NACLs all configured correctly. Despite all that, the EC2 instances wouldn't register with the ECS cluster. Finally I figured out that my custom VPC's DHCP options set was configured with 'domain-name-servers: xx.xx.xx.xx, xx.xx.xx.xx', the IP addresses of my org's internal DNS servers.
The solution is to have the following values in the DHCP options set:
domain-name: us-west-2.compute.internal (assuming your VPC is in us-west-2)
domain-name-servers: AmazonProvidedDNS
I got this error today and figured out the problem: I had missed setting the IAM role in the launch template (it is under the Advanced section). You need to set it to ecsInstanceRole (this is the default name AWS gives, so check whether you have changed it and use your name accordingly).
I had switched from a launch configuration to a launch template, and while setting up the launch template I missed adding the role.
If you use a task definition, check that you have set the execution and task role ARNs and that the correct policies are attached to those roles.
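For example, in a task definition these are two separate fields. A minimal sketch (the ARNs, family, and container values are placeholders) looks like this:
{
  "family": "my-task",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/myAppTaskRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "nginx",
      "memory": 256
    }
  ]
}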

Is it possible to enable CloudWatch on a running EC2 instance?

It looks like Amazon has a ready-built IAM role to grant instances CloudWatch write access. (A more restrictive one could also be created if necessary.)
But it appears you cannot attach an IAM role to a running instance.
Am I missing something? Do I really have to re-instantiate my whole fleet to enable CloudWatch? I'm reluctant to save plaintext credentials on each host for security reasons.
I assume you're talking about custom CloudWatch metrics. You don't have to restart any instances to enable them. You can create a group in IAM with the following policy and add a user to this group:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "****************",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
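If you'd rather script that setup, a rough sketch with the AWS CLI (the group, user, and file names are placeholders, and the policy above is assumed to be saved as put-metric-data.json) could be:
aws iam create-group --group-name cloudwatch-putters
aws iam put-group-policy \
  --group-name cloudwatch-putters \
  --policy-name PutMetricData \
  --policy-document file://put-metric-data.json
aws iam create-user --user-name metrics-user
aws iam add-user-to-group --group-name cloudwatch-putters --user-name metrics-user
aws iam create-access-key --user-name metrics-user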
Then you basically copy this user's credentials to the awscreds file and add the Perl script to cron. Yes, I had to copy the credentials to each machine where custom metrics collection is enabled.
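As an illustration of what those credentials allow, publishing a custom metric comes down to a single call like this (the namespace, metric name, and value are arbitrary examples):
aws cloudwatch put-metric-data \
  --namespace "Custom/System" \
  --metric-name MemoryUtilization \
  --unit Percent \
  --value 42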
Have you considered simply modifying the existing IAM role to enable writes to CloudWatch? That change should take effect immediately and does not require instance reboot or relaunch.