NOTE: This is about ELBv2 ALBs, not legacy (ELBv1) load balancers, but my humble rep won't let me improve the tagging.
I'm attempting to create an AWS ECS Fargate service. I have created one Application Load Balancer for each container in the task, and since create-service now supports multiple load balancers according to the docs, we should be all good, right? Since it's an ALB, I specify targetGroupArn rather than loadBalancerName, and since ECS has a service-linked role created by default (AWSServiceRoleForECS), I ought to be able to map the target groups and go ahead and create the service, right? Not like it used to be when you had to create ecsServiceRole manually, from what the internet tells me.
My command is:
aws ecs create-service --region $REGION --cluster $CLUSTER --service-name production-svc --task-definition $TASK_ARN --desired-count 2 --launch-type "FARGATE" --network-configuration "awsvpcConfiguration={subnets=[${SUBNET_1},${SUBNET_2}],securityGroups=[${SECURITYGROUP_ID}]}" --load-balancers targetGroupArn=tgt-a,containerName=a,containerPort=5000 targetGroupArn=tgt-b,containerName=b,containerPort=6000
And the error I get is:
An error occurred (InvalidParameterException) when calling the CreateService operation: Unable to assume role and validate the specified targetGroupArn. Please verify that the ECS service role being passed has the proper permissions.
Now, looking through the internet, it could be either that my load balancers don't exist (you'll have to trust me, they do: I supply target group ARNs, those target groups exist as confirmed with the aws elbv2 CLI, and they are mapped to valid, active Application Load Balancers that all live in the same region as the cluster), or that the service-linked role (the one AWS created for me earlier) doesn't have enough rights to assume in order to verify the targetGroupArn.
Do I really have to add rights to the automatic ECS service-linked role? If so, which rights?
"elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
"elasticloadbalancing:DeregisterTargets",
"elasticloadbalancing:Describe*",
"elasticloadbalancing:RegisterInstancesWithLoadBalancer",
"elasticloadbalancing:RegisterTargets",
My automatic role (which has AmazonECSServiceRolePolicy attached) already has all of those, so which ones are missing?
I was having the same issue, and can confirm what Ali says:
It turns out that if you are creating the service via the CLI with aws ecs create-service, you need to pass the full ARN of the target group in targetGroupArn, not just its name.
The documentation confirms that:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ecs/create-service.html
It's a bit confusing because in the CloudFormation template you can just use the "short" name. Also the error message is a bit misleading.
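For reference, here is a sketch of what the corrected call could look like. The ARNs below are placeholders (the account ID, region and target group suffixes are made up); the point is simply that targetGroupArn carries the full ARN:

aws ecs create-service --region $REGION --cluster $CLUSTER \
  --service-name production-svc \
  --task-definition $TASK_ARN \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[${SUBNET_1},${SUBNET_2}],securityGroups=[${SECURITYGROUP_ID}]}" \
  --load-balancers \
    "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/tgt-a/0123456789abcdef,containerName=a,containerPort=5000" \
    "targetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:111111111111:targetgroup/tgt-b/0123456789abcdef,containerName=b,containerPort=6000"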
Hope this helps.
I'm trying to connect a Spring Boot application from AWS EKS to AWS OpenSearch, both of which reside in a VPC. Though the connection is successful, I'm unable to write any data to the index.
All the AWS resources (EKS and OpenSearch) are configured using Terraform. I have mentioned the Elasticsearch subnet CIDR in the egress rule attached to the application. Also, the application correctly assumes the EKS service account and the pod role, which I mentioned in the services stanza for Elasticsearch. In the policy attached to the pod role, I see all the permissions mentioned: ESHttpPost, ESHttpGet, ESHttpPut, etc.
This is the error I get:
{"error":{"root_cause": [{"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-
hellodemo-role-1,backend_roles=
[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo
role-1], requested
Tenant=null]"}],"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld demo-eks-PodRle-
hellodemo-role-1,
backend_roles=[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-
PodRle-hellodemo role-1], requested Tenant=null]"},"status":403}
Is there anything that I'm missing out on while configuring?
This error can be resolved by assigning the pod role to the additional_roles key in the Elasticsearch Terraform. This is taken care of internally by AWS STS when it receives a request from EKS.
While creating an AWS EMR cluster, I always get the issue: Service role EMR_DefaultRole has insufficient EC2 permissions.
And the cluster terminates automatically. I have even followed the AWS documentation's steps for recreating the EMR-specific roles, but no progress. Please guide me on how to resolve the issue: Service role EMR_DefaultRole has insufficient EC2 permissions.
EMR needs two roles to start the cluster: 1) the EC2 instance profile role and 2) the EMR service role. The service role should have enough permissions to provision the new resources needed to start the cluster: EC2 instances, their network, etc. There could be many reasons for this common error:
Verify the resources and their actions. Refer to https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-iam-role.html.
Check whether you are passing the tag that signifies that the cluster should use the EMR managed policies (a CLI sketch follows the snippet below):
{
"Key": "for-use-with-amazon-emr-managed-policies",
"Value": "true"
}
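For example, when launching the cluster from the CLI, that tag could be passed roughly like this (the cluster parameters here are placeholders, not taken from the question):

aws emr create-cluster \
  --name "my-cluster" \
  --release-label emr-6.9.0 \
  --applications Name=Spark \
  --instance-type m5.xlarge --instance-count 3 \
  --service-role EMR_DefaultRole \
  --ec2-attributes InstanceProfile=EMR_EC2_DefaultRole \
  --tags for-use-with-amazon-emr-managed-policies=true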
Lastly, try to find out the exact reason from CloudTrail. Go to AWS > CloudTrail. In the event history, enable the error code column so that you can see the exact error. If you find an error code something like 'You are not authorized to perform this operation. Encoded authorization failure message', then open the event history details, pick up the encoded error message and decode it using the AWS CLI:
aws sts decode-authorization-message --encoded-message <message>. This will show you the complete role details, event, resources and actions. Compare it with the IAM permissions and you can find out the missing permission or parameter that you need to pass while creating the job flow.
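As a sketch, the decode step can be piped through a JSON pretty-printer to make the output easier to scan ($MSG stands in for the encoded message copied from CloudTrail):

aws sts decode-authorization-message --encoded-message "$MSG" \
  --query DecodedMessage --output text | python -m json.tool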
I have a multi-account AWS environment (set up using AWS Landing Zone) and I need to copy a specific security group to all the accounts. I do have a CFT written, but it's too repetitive a task to do this one by one.
The security group is in the central (shared-services) account, which has access to all the other accounts. It would be better if there were a way to integrate this into the Account Vending Machine (AVM) in order to avoid the future task of exporting the SG to newly spawned accounts.
You should use CloudFormation StackSets. StackSets is a feature of CloudFormation in which you have a master (administrator) account, in which you create/update/remove the stack set, and children accounts. In the stack set, you configure the children AWS accounts you want to deploy the CloudFormation template to, as well as the regions.
From your comment, your master account is going to be shared-services and the rest of your accounts the children. You will need to deploy a couple of IAM roles to allow cross-account access, but after that you will be able to deploy all your templates into up to 500 AWS accounts automatically, and update them as well.
More information here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/what-is-cfnstacksets.html
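As a rough sketch of the flow from the shared-services account (the stack set name, template file, account IDs and region are placeholders, and this assumes the cross-account roles AWSCloudFormationStackSetAdministrationRole and AWSCloudFormationStackSetExecutionRole are already in place):

# Create the stack set in the administrator (shared-services) account
aws cloudformation create-stack-set \
  --stack-set-name shared-security-group \
  --template-body file://security-group.yaml

# Deploy it as stack instances into the children accounts
aws cloudformation create-stack-instances \
  --stack-set-name shared-security-group \
  --accounts 111111111111 222222222222 \
  --regions eu-west-1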
You can export the Security Group and other configuration with CloudFormation using CloudFormer, which creates a template from the existing account configuration. Check the steps in this guide: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-using-cloudformer.html It will upload the template to S3, and you can reuse it or some of its parts.
Since you are using AWS Landing Zone, you can add the security group to the aws_baseline templates, either as a new template or added to one of the existing files. When submitted, AWS Landing Zone uses Step Functions and AWS Stack Sets to deploy your SG to all existing and future accounts. If you choose to create the security group in a new file, you must add a reference to it in the manifest.yaml file (compare with how the other templates are referenced).
I was able to do this via the Account Vending Machine. But AGL's Stacksets should be a good alternative too.
Copying an AWS security group from one account and region to another account and region requires a fair amount of scripting in the AWS CLI or boto3.
But one thing I did that was feasible for my use case (whitelisting 14 IPs for HTTPS) was to write a bash script.
Beforehand, I created a blank SG in the other AWS account (you could use the AWS CLI to create that too):
# Fill these in before running
Region1=            # region of the source security group
SGFromCopy=         # ID of the security group to copy from
profilefromcopy=    # CLI profile for the source account
Region2=            # region of the destination security group
SGToCopy=           # ID of the (blank) security group to copy to
profiletocopy=      # CLI profile for the destination account

# For every ingress CIDR in the source SG, add an HTTPS (443) rule to the destination SG
for IP in $(aws ec2 describe-security-groups --region $Region1 --group-ids $SGFromCopy --profile $profilefromcopy --query 'SecurityGroups[].IpPermissions[].IpRanges[].CidrIp' --output text); do
  aws ec2 authorize-security-group-ingress --group-id $SGToCopy --ip-permissions "IpProtocol=TCP,FromPort=443,ToPort=443,IpRanges=[{CidrIp=$IP}]" --profile $profiletocopy --region $Region2
done
You may modify the script if you have the SG rules in CSV format and then just iterate over them in a while loop.
BONUS
To get the desired output you may have to redirect the output to a file or somewhere else:
> aws ec2 describe-security-groups --region $Region1 --group-ids $SGFromCopy --profile $profilefromcopy --query 'SecurityGroups[].IpPermissions[]' --output text
We have a policy we attach to roles that ensures the EC2 provisioner has included the required tags defined by our finance department; sample here.
I can picture an engineer getting frustrated when, each time he tries to spin up an EC2 instance, it's immediately shut down because he forgot to include the required tags and hit a DENY in the IAM policy, but he has no way of knowing that.
I was hoping for a custom error description returned by the API. It doesn't have to be IAM; if there's a benefit to instead using a Lambda fired off the CloudWatch RunInstances event, I'm open to that as well.
What can we do to inform the engineer that his instance was shut down due to missing required tags?
Would love to hear your suggestions!
AWS offers a base set of APIs. It's impossible to provide every feature that all users want, but using the base APIs anyone can build a service on top of AWS.
For example, instead of having your developers launch instances directly through AWS, you could have them use a custom interface (perhaps a page on an Intranet) where they can request certain services. This interface can then call AWS APIs on their behalf, including required elements, such as tags. It's like storage -- people don't write directly to the disk, they do it through their operating system.
If that's too low-level for you, an alternative is to use AWS CloudFormation, which launches services based upon a template. The template can collect the required information or automatically add it to instances when they are launched.
Then, throw in AWS Service Catalog and you can force users to launch services through CloudFormation templates. Service Catalog offers a list of services (effectively just CloudFormation templates) that users can launch -- even if the users don't have permission to launch the services themselves!
For example, let's say your developers do not have permission to create an Amazon EC2 instance. You could provide a template via Service Catalog that launches an EC2 instance on their behalf but also enforces your standards, such as tagging, subnets, security groups, etc.
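To give a sense of what this looks like from the engineer's side, launching such a catalog item from the CLI is roughly the following (the product and provisioning-artifact IDs are placeholders, not real values):

aws servicecatalog provision-product \
  --product-id prod-examplexxxxxxx \
  --provisioning-artifact-id pa-examplexxxxxxx \
  --provisioned-product-name my-tagged-ec2-instance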
Bottom line: If you don't see something in AWS that specifically meets your needs, you can often build it on top of AWS either via your own code or via AWS Service Catalog.
tl;dr The access denied message does include the condition it failed on.
The sample linked to provides an IAM policy to deny RunInstances if tags were not also included in the RunInstances API call. Resource-level permissions were introduced in March 2017, allowing users to include tags in the RunInstances API call and allowing IAM to enforce EC2 resource-level permissions - in this case, forcing users to include the required tags.
Prior to March 2017, two API calls were required to create tags:
ec2 run-instances --image-id ami-6df1e514 --count 1 --instance-type t2.micro --subnet-id subnet-e25e29bb
ec2-create-tags <instanceid> --tag "Name=<value>" --tag "App=<value>" --tag "AppOwner=<value>" --tag "Environment=<value>"
After implementing this IAM policy, the workflow above would DENY at step 1.
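For context, a minimal reconstruction of what one statement of such a deny policy could look like (this is not the linked sample itself; in practice you would add one statement per required tag, and only AppOwner is shown here). The Null condition makes the Deny match whenever the tag key is absent from the request:

cat > require-tags-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyRunInstancesWithoutAppOwnerTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/AppOwner": "true" }
      }
    }
  ]
}
EOF
aws iam create-policy --policy-name require-ec2-tags --policy-document file://require-tags-policy.json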
Here’s the new workflow for provisioning an EC2 instance which includes tags:
ec2 run-instances --image-id ami-6df1e514 --count 1 --instance-type t2.micro --subnet-id subnet-e25e29bb --tag-specifications 'ResourceType=instance,Tags=[{Key=name,Value=required_tag_name_value},{Key=App,Value=required_tag_app_value},{Key=AppOwner,Value=required_tag_appowner_value},{Key=Environment,Value=required_tag_env_value}]'
Based on the sample IAM policy linked to, if the user does not include the required tags, the returned error message is encoded and displayed to the user like this:
An error occurred (UnauthorizedOperation) when calling the
RunInstances operation: You are not authorized to perform this
operation. Encoded authorization failure message:
zGetZzIIedikZSAbE4YGEGhy1ytjrXD8Ak-hr1UJvDkKW7wzDu27ZS0NfMGaOUBQGO1I3b3v6Us8BXO-41973SckcmEH17019Sheua16dmrTPYHYymw9pftYope_jmR6MgsvH1bMP0FE_gHnEvaJCIMNukOo-utK....
If the user's IAM policy also includes sts:DecodeAuthorizationMessage, they can decode the message with the following:
aws sts decode-authorization-message --encoded-message <encoded message here>
{
"DecodedMessage": "{\"allowed\":false,\"explicitDeny\":true,\"matchedStatements\":{\"items\":[{\"statementId\":\"\",\"effect\":\"DENY\",\"principals\":{\"items\":[{\"value\":\"AROAJVNFHTEF6I2STOU\"}]},\"principalGroups\":{\"items\":[]},\"actions\":{\"items\":[{\"value\":\"ec2:RunInstances\"}]},\"resources\":{\"items\":[{\"value\":\"arn:aws:ec2:::instance/\"}]},\"conditions\":{\"items\":[{\"key\":\"aws:RequestTag/AppOwner\",\"values\":{\"items\":[{\"value\":\"true\"}]}}]}}]},\"failures\":{\"items\":[]},\"context\":{\"principal\":{\"id\":\"AROAJVNFHTEF6I2STOU-CLI-session-1501883988\",\"arn\":\"arn:aws:sts:::assumed-role/_test_require_tags/AWS-CLI-session-1501883988\"},\"action\":\"ec2:RunInstances\",\"resource\":\"arn:aws:ec2:us-west-2::instance/\",\"conditions\":{\"items\":[{\"key\":\"ec2:Tenancy\",\"values\":{\"items\":[{\"value\":\"default\"}]}},{\"key\":\"ec2:AvailabilityZone\",\"values\":{\"items\":[{\"value\":\"us-west-2c\"}]}},{\"key\":\"ec2:Region\",\"values\":{\"items\":[{\"value\":\"us-west-2\"}]}},{\"key\":\"ec2:ebsOptimized\",\"values\":{\"items\":[{\"value\":\"false\"}]}},{\"key\":\"ec2:InstanceType\",\"values\":{\"items\":[{\"value\":\"t2.micro\"}]}},{\"key\":\"ec2:RootDeviceType\",\"values\":{\"items\":[{\"value\":\"ebs\"}]}}]}}}"
}
While a little difficult to read, we can see which condition the RunInstances call failed on:
aws:RequestTag/AppOwner\",\"values\":{\"items\":[{\"value\":\"true\"}]}}]}}]},\"failures\":
I am trying to attach an IAM role to multiple EC2 instances based on tags. Is there a module already available that I can use? I have been searching for a bit but couldn't find anything specific.
Attaching an IAM role to existing EC2 instances is a relatively new feature (announced in February 2017). There is no support for it in Ansible currently. If you have AWS CLI 1.11.46 or higher installed, you can use the shell module to invoke the AWS CLI and achieve the desired result.
See: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
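For reference, the underlying CLI call looks roughly like this (the instance ID and instance profile name are placeholders):

aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=MyInstanceProfile

From Ansible you can wrap this in a shell task and loop over the instance IDs that your tag-based dynamic inventory returns.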
I submitted a PR last year to add two AWS modules: boto3 and boto3_wait.
These two modules allow you to interact with the AWS API using boto3.
For instance, you could attach a role to an existing EC2 instance by calling the associate_iam_instance_profile method on the EC2 service:
- name: Attach role MyRole
  boto3:
    service: ec2
    region: us-east-1
    operation: associate_iam_instance_profile
    parameters:
      IamInstanceProfile:
        Name: MyRole
      InstanceId: i-xxxxxxxxxx
Feel free to give the PR a thumbs-up if you like it! ;)
In addition to this, you can use AWS dynamic inventory to target instances by tag.