EKS Error: You must be logged in to the server

I created an EKS cluster via Terraform:
resource "aws_eks_cluster" "eks-cluster" {
name = "tf-example"
role_arn = aws_iam_role.eks_role.arn
vpc_config {
subnet_ids = var.subnet_ids
}
depends_on = [
aws_iam_role_policy_attachment.eks-cluster-policy,
aws_iam_role_policy_attachment.eks-cluster-security-group-policy
]
}
resource "aws_eks_node_group" "eks-node-group" {
cluster_name = aws_eks_cluster.eks-cluster.name
instance_types = var.instance_types
node_group_name = "tf-example"
node_role_arn = aws_iam_role.eks-node-group.arn
subnet_ids = var.subnet_ids
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
update_config {
max_unavailable = 1
}
depends_on = [
aws_iam_role_policy_attachment.eks-node-group-worker-node-policy,
aws_iam_role_policy_attachment.eks-node-group-cni-policy,
aws_iam_role_policy_attachment.eks-node-group-registry-read-only-policy
]
}
The IAM roles and policies look like this:
resource "aws_iam_role" "eks_role" {
name = "tf-${var.stack_name}-eks-cluster-role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com",
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role" "eks-node-group" {
name = "tf-${var.stack_name}-eks-node-group-role"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"eks.amazonaws.com",
"ec2.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
When I run aws eks update-kubeconfig --name cluster_name --region region_name and then try any kubectl command, I get:
error: You must be logged in to the server (Unauthorized)
I'm new to AWS and used to GCP. What policies does my user need? What role do I need to be able to run any kubectl command?
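By default, only the IAM identity that created the cluster can authenticate to the Kubernetes API; any other principal must be mapped in the cluster's aws-auth ConfigMap (and the user additionally needs eks:DescribeCluster for update-kubeconfig itself). A minimal sketch of such a mapping with the Terraform kubernetes provider; the account ID, user name, and the system:masters group are placeholders for illustration, not a recommendation:

data "aws_eks_cluster_auth" "this" {
  name = aws_eks_cluster.eks-cluster.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.eks-cluster.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.eks-cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}

# Patch the aws-auth ConfigMap so an additional IAM user may authenticate.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapUsers = yamlencode([
      {
        userarn  = "arn:aws:iam::111122223333:user/my-user" # placeholder
        username = "my-user"
        groups   = ["system:masters"] # full admin; narrow this in practice
      }
    ])
  }

  force = true
}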

Related

Attaching a PERSISTENT_2 FSx to an AWS Batch compute instance using Terraform

I have Terraform code that almost successfully builds an AWS Batch compute environment with an FSx file share mounted to it.
However, despite passing the aws_fsx_lustre_file_system resource a deployment type of PERSISTENT_2:
resource "aws_fsx_lustre_file_system" "storage" {
storage_capacity = 1200
subnet_ids = [var.subnet_id]
deployment_type = "PERSISTENT_2"
per_unit_storage_throughput = 250
}
the FSx file system only spins up as a scratch drive (viewable via the AWS Management Console).
What additional information can I post here to help debug why this Terraform code is not respecting the deployment_type parameter?
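One concrete detail worth posting is the AWS provider version in use (terraform version shows it): PERSISTENT_2 support was added to the provider relatively late, so pinning a recent release rules that class of problem out. A sketch of an explicit constraint; the exact version floor is an assumption:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0" # assumed floor; any release with PERSISTENT_2 support will do
    }
  }
}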
Full code:
// ==========================================================
// Module input variables
// ----------------------------------------------------------
variable "region" {
type = string
}
variable "compute_environment_name" {
type = string
}
variable "job_queue_name" {
type = string
}
variable "max_vcpus" {
type = number
}
variable "vpc_id" {
type = string
}
variable "subnet_id" {
type = string
}
variable "security_group_id" {
type = string
}
variable "mounted_storage_bucket" {
type = string
}
// ==========================================================
// Components for batch processing for AWS Batch
// ----------------------------------------------------------
resource "aws_iam_role" "batch_role" {
name = "batch_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement":
[
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "batch.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
}
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# Attach the Batch policy to the Batch role
resource "aws_iam_role_policy_attachment" "batch_service_role" {
role = aws_iam_role.batch_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole"
}
resource "aws_iam_role_policy_attachment" "elastic_container_service_role" {
role = aws_iam_role.batch_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
# Security Group for batch processing
resource "aws_security_group" "batch_security_group" {
name = "batch_security_group"
description = "AWS Batch Security Group for batch jobs"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
created-by = "Terraform"
}
}
# IAM Role for underlying EC2 instances
resource "aws_iam_role" "ec2_role" {
name = "ec2_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# Assign the EC2 role to the EC2 profile
resource "aws_iam_instance_profile" "ec2_profile" {
name = "ec2_profile"
role = aws_iam_role.ec2_role.name
}
# Attach the EC2 container service policy to the EC2 role
resource "aws_iam_role_policy_attachment" "ec2_policy_attachment" {
role = aws_iam_role.ec2_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
# IAM Role for jobs
resource "aws_iam_role" "job_role" {
name = "job_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement":
[
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
}
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# S3 read/write policy
resource "aws_iam_policy" "s3_policy" {
name = "s3_policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::${var.mounted_storage_bucket}",
"arn:aws:s3:::${var.mounted_storage_bucket}/*"
]
}
]
}
EOF
}
resource "aws_iam_policy" "ecs_policy" {
name = "ecs_policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*"
],
"Resource": [
"*"
]
}
]
}
EOF
}
# Attach the policy to the job role
resource "aws_iam_role_policy_attachment" "job_policy_attachment_s3" {
role = aws_iam_role.job_role.name
policy_arn = aws_iam_policy.s3_policy.arn
}
resource "aws_iam_role_policy_attachment" "job_policy_attachment_ecs" {
role = aws_iam_role.job_role.name
policy_arn = aws_iam_policy.ecs_policy.arn
}
resource "aws_fsx_lustre_file_system" "storage" {
storage_capacity = 1200
subnet_ids = [var.subnet_id]
deployment_type = "PERSISTENT_2"
per_unit_storage_throughput = 250
}
resource "aws_fsx_data_repository_association" "storage_association" {
file_system_id = aws_fsx_lustre_file_system.storage.id
data_repository_path = "s3://${var.mounted_storage_bucket}"
file_system_path = "/data/fsx"
s3 {
auto_export_policy {
events = ["NEW", "CHANGED", "DELETED"]
}
auto_import_policy {
events = ["NEW", "CHANGED", "DELETED"]
}
}
}
resource "aws_launch_template" "launch_template" {
name = "launch_template"
update_default_version = true
user_data = base64encode(<<EOF
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"
runcmd:
- region=${var.region}
- amazon-linux-extras install -y lustre2.10
- mkdir -p /data/fsx
- mount -t lustre ${aws_fsx_lustre_file_system.storage.dns_name}#tcp:fsx" /data/fsx
--==MYBOUNDARY==--
EOF
)
}
// ==========================================================
// Batch setup
// - compute environment
// - job queue
// ----------------------------------------------------------
resource "aws_iam_role" "ecs_instance_role" {
name = "ecs_instance_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs_instance_role" {
role = "${aws_iam_role.ecs_instance_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
resource "aws_iam_instance_profile" "ecs_instance_role" {
name = "ecs_instance_role"
role = "${aws_iam_role.ecs_instance_role.name}"
}
resource "aws_iam_role" "aws_batch_service_role" {
name = "aws_batch_service_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "batch.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "aws_batch_service_role" {
role = "${aws_iam_role.aws_batch_service_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole"
}
resource "aws_batch_compute_environment" "batch_environment" {
compute_environment_name = var.compute_environment_name
compute_resources {
instance_role = "${aws_iam_instance_profile.ecs_instance_role.arn}"
launch_template {
launch_template_name = aws_launch_template.launch_template.name
version = "$Latest"
}
instance_type = [
"c6g.large",
"c6g.xlarge",
"c6g.2xlarge",
"c6g.4xlarge",
"c6g.8xlarge",
"c6g.12xlarge"
]
max_vcpus = 16
min_vcpus = 0
security_group_ids = [
aws_security_group.batch_security_group.id,
]
subnets = [
var.subnet_id
]
type = "EC2"
}
service_role = "${aws_iam_role.aws_batch_service_role.arn}"
type = "MANAGED"
depends_on = [aws_iam_role_policy_attachment.aws_batch_service_role]
tags = {
created-by = "Terraform"
}
}
resource "aws_batch_job_queue" "job_queue" {
name = "job_queue"
state = "ENABLED"
priority = 1
compute_environments = [
aws_batch_compute_environment.batch_environment.arn
]
depends_on = [aws_batch_compute_environment.batch_environment]
tags = {
created-by = "Terraform"
}
}
output "batch_compute_environment_id" {
value = aws_batch_compute_environment.batch_environment.id
}
output "batch_job_queue_id" {
value = aws_batch_job_queue.job_queue.id
}
output "batch_storage_mount_target" {
value = aws_fsx_lustre_file_system.storage.arn
}
output "batch_storage_mount_target_mount" {
value = aws_fsx_lustre_file_system.storage.mount_name
}

Nodes are created with the same name and do not join the EKS cluster via Terraform

I checked all the similar questions on Stack Overflow but couldn't find a decent answer for this issue. The main problem: when I apply my Terraform, the instances come up and run successfully and I can see the node group under EKS, but I can't see any nodes under my EKS cluster. I followed this AWS article and applied the steps below, but it didn't work. The article also mentions aws-auth and userdata. Should I add those as well, and how (see the sketch after the list below)? (Do I need userdata when I already added the optimized AMI?)
In summary, my main problems are:
- my instances are all created with the same name
- my instances do not join the EKS cluster
Steps applied from the AWS article:
I added the EKS-optimized AMI, but it doesn't solve my problem:
/aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/image_id
(Update: during installation the node group fails, probably because this image is not suitable for t2.micro.)
I set the VPC parameters the article calls for:
enable_dns_support = true
enable_dns_hostnames = true
I set the tags for my worker nodes:
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
I specified the userdata line in the launch template; below you can see the userdata.sh file that I call from the launch template.
There are still no nodes :(
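On the aws-auth question from above: for self-managed node groups, the worker-node role must be mapped into the cluster's aws-auth ConfigMap, or the kubelets can never register. With a managed aws_eks_node_group, EKS normally creates this mapping for you, so what follows is only a sketch of what the mapping would look like with the Terraform kubernetes provider (provider wiring against the cluster is assumed):

# Sketch: map the worker-node role into aws-auth so nodes can join.
resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([
      {
        rolearn  = aws_iam_role.eks_nodes.arn
        username = "system:node:{{EC2PrivateDNSName}}"
        groups   = ["system:bootstrappers", "system:nodes"]
      }
    ])
  }

  force = true
}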
node_grp.tf: here is my EKS worker node Terraform file, with policies.
resource "aws_iam_role" "eks_nodes" {
name = "eks-node-group"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy" "node_autoscaling" {
name = "${var.base_name}-node_autoscaling_policy"
role = aws_iam_role.eks_nodes.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:DescribeTags"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_nodes.name
}
resource "aws_eks_node_group" "node" {
cluster_name = var.cluster_name
node_group_name = "${var.base_name}-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = var.private_subnet_ids
scaling_config {
desired_size = var.desired_nodes
max_size = var.max_nodes
min_size = var.min_nodes
}
launch_template {
name = aws_launch_template.node_group_template.name
version = aws_launch_template.node_group_template.latest_version
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
}
resource "aws_launch_template" "node_group_template" {
name = "${var.cluster_name}_node_group"
instance_type = var.instance_type
user_data = base64encode(templatefile("${path.module}/userdata.sh", { API_SERVER_URL = var.cluster_endpoint, B64_CLUSTER_CA = var.ca_certificate, CLUSTER_NAME = var.cluster_name } ))
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = var.disk_size
}
}
tag_specifications {
resource_type = "instance"
tags = {
"Instance Name" = "${var.cluster_name}-node"
Name = "${var.cluster_name}-node"
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
}
}
}
cluster.tf: my main EKS cluster file.
resource "aws_iam_role" "eks_cluster" {
name = var.cluster_name
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = aws_iam_role.eks_cluster.arn
enabled_cluster_log_types = ["api", "audit", "authenticator","controllerManager","scheduler"]
vpc_config {
security_group_ids = [var.security_group_id]
subnet_ids = flatten([ var.private_subnet_ids, var.public_subnet_ids ])
endpoint_private_access = false
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.AmazonEKSServicePolicy
]
}
resource "aws_iam_openid_connect_provider" "oidc_provider" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = var.trusted_ca_thumbprints
url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}
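As an aside, the OIDC thumbprint does not have to be maintained by hand in var.trusted_ca_thumbprints; the tls provider can read it from the issuer. A small sketch:

data "tls_certificate" "oidc" {
  url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}

# then: thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]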
userdata.sh: my userdata file, called from the launch template.
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}

--==MYBOUNDARY==--

Error: Error creating IAM Role s3_access: MalformedPolicyDocument

I get this error when I run Terraform: Error creating IAM Role s3_access: MalformedPolicyDocument: Has prohibited field Resource, status code: 400. What am I missing in the IAM role? I am using this role to fetch a certain file from S3, and I want to give it limited permissions, i.e. only fetching the contents of a certain bucket.
resource "aws_instance" "web" {
count = var.ec2_count
ami = var.ami_id
instance_type = var.instance_type
subnet_id = var.subnet_id
key_name = var.key_name
source_dest_check = false
associate_public_ip_address = true
#user_data = "${file("userdata.sh")}"1
security_groups = [aws_security_group.ec2_sg.id]
user_data = "${file("${path.module}/template/userdata.sh")}"
tags = {
Name = "Webserver"
}
}
resource "aws_iam_role" "s3_access" {
name = "s3_access"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": [
"s3:ListBucket",
"s3:GetObjectVersion",
"s3:GetObject",
"s3:GetBucketVersioning",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::webserver/*",
"arn:aws:s3:::webserver"
]
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_security_group" "ec2_sg" {
name = "ec2-sg"
description = "Allow TLS inbound traffic"
vpc_id = var.vpc_id
ingress {
description = "incoming for ec2-instance"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "ec2-sg"
}
}
Any type of help would be appreciated. I have tried doing it myself but I am stuck.
The problem is that assume_role_policy is the role's trust policy: it may only state who can assume the role via sts:AssumeRole, so Resource is a prohibited field there. The S3 permissions belong in a separate policy attached to the role:
resource "aws_iam_role" "s3_access" {
name = "s3_access"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
tags = {
tag-key = "tag-value"
}
}
resource "aws_iam_role_policy" "s3_access_policy" {
name = "s3_access_policy"
role = "${aws_iam_role.s3_access.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "2",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObjectVersion",
"s3:GetObject",
"s3:GetBucketVersioning",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::webserver/*",
"arn:aws:s3:::webserver"
]
}
]
}
EOF
}
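Note that the role never reaches the instance unless it is wrapped in an instance profile and referenced from aws_instance; a small sketch of the missing glue (names are placeholders):

resource "aws_iam_instance_profile" "s3_access" {
  name = "s3_access_profile"
  role = aws_iam_role.s3_access.name
}

# and inside resource "aws_instance" "web":
#   iam_instance_profile = aws_iam_instance_profile.s3_access.name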

Terraform service role for creating spot instances

I am a newbie to Terraform and I am trying to create a service role for creating spot instances. What is the service name I should use for spot instances? Does "Service": "ec2.amazonaws.com" help to create spot instances?
I also noticed that in the AWS console there is an option to select a use case for EC2 spot instances. Does Terraform also have an option to select the use case?
Terraform version: Terraform v0.11.0
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
What you have is part of the steps to create an instance profile for an EC2 instance to assume an IAM role (step 3 below):
1. Create an IAM policy for the role.
2. Create the IAM role and attach the policy.
3. Give the EC2 instance permission to assume the role.
resource "aws_iam_role_policy" "test_policy" {
name = "test_policy"
role = "${aws_iam_role.test_role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"ec2:Describe*"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role" "test_role" {
name = "test_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_spot_fleet_request" "cheap_compute" {
iam_fleet_role = "arn:aws:iam::12345678:role/spot-fleet"
spot_price = "0.03"
allocation_strategy = "diversified"
target_capacity = 6
valid_until = "2019-11-04T20:44:20Z"
launch_specification {
instance_type = "m4.10xlarge"
ami = "ami-1234"
spot_price = "2.793"
placement_tenancy = "dedicated"
}
launch_specification {
instance_type = "m4.4xlarge"
iam_instance_profile = "${aws_iam_role.test_role.name}"
ami = "ami-5678"
key_name = "my-key"
spot_price = "1.117"
availability_zone = "us-west-1a"
subnet_id = "subnet-1234"
weighted_capacity = 35
root_block_device {
volume_size = "300"
volume_type = "gp2"
}
tags {
Name = "spot-fleet-example"
}
}
}
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
https://www.terraform.io/docs/providers/aws/r/instance.html#iam_instance_profile
https://www.terraform.io/docs/providers/aws/r/iam_role_policy.html
https://www.terraform.io/docs/providers/aws/r/spot_fleet_request.html
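To answer the service-name question directly: a role for the instances themselves trusts ec2.amazonaws.com, but the iam_fleet_role passed to aws_spot_fleet_request must be assumable by the Spot Fleet service. A sketch of such a role (the role name is a placeholder):

resource "aws_iam_role" "spot_fleet_role" {
  name = "spot-fleet-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Effect": "Allow",
      "Principal": {
        "Service": "spotfleet.amazonaws.com"
      }
    }
  ]
}
EOF
}

resource "aws_iam_role_policy_attachment" "spot_fleet_role" {
  role       = "${aws_iam_role.spot_fleet_role.name}"
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2SpotFleetTaggingRole"
}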

Terraform: ECS service - InvalidParameterException

I am trying to provision an ECS cluster with Terraform. Everything seems to work well up until I create the ECS service:
resource "aws_ecs_service" "ecs-service" {
name = "ecs-service"
iam_role = "${aws_iam_role.ecs-service-role.name}"
cluster = "${aws_ecs_cluster.ecs-cluster.id}"
task_definition = "${aws_ecs_task_definition.my_cluster.family}"
desired_count = 1
load_balancer {
target_group_arn = "${aws_alb_target_group.ecs-target-group.arn}"
container_port = 80
container_name = "my_cluster"
}
}
and the IAM role is:
resource "aws_iam_role" "ecs-service-role" {
name = "ecs-service-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs-service-role-attachment" {
role = "${aws_iam_role.ecs-service-role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
I am getting the following error message:
aws_ecs_service.ecs-service: 1 error(s) occurred:
aws_ecs_service.ecs-service: InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify
that the ECS service role being passed has the proper permissions.
In assume_role_policy, change the "Principal" line as shown below: you currently have ec2.amazonaws.com, but the ECS service role must be assumable by ecs.amazonaws.com.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
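Putting it together, the corrected role is the same resource with only the principal changed; a sketch:

resource "aws_iam_role" "ecs-service-role" {
  name = "ecs-service-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}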