I have created an RDS Proxy using Terraform. However, it does not seem to be working.
My application code cannot connect to the proxy (timeout) and aws rds describe-db-proxy-targets gives the following:
{
"Targets": [
{
"Endpoint": "mydb.aaaaaaaaaaaa.eu-west-2.rds.amazonaws.com",
"RdsResourceId": "mydb",
"Port": 5432,
"Type": "RDS_INSTANCE",
"TargetHealth": {
"State": "UNAVAILABLE",
"Description": "DBProxy Target unavailable due to an internal error"
}
}
]
}
How can I go about debugging this?
Here is the Terraform script for the proxy. The RDS instance is described elsewhere, but is working.
data "aws_subnet" "mydb_rds" {
filter {
name = "availability-zone"
values = [ aws_db_instance.mydb.availability_zone ]
}
}
resource "aws_secretsmanager_secret" "mydb_rds_proxy" {
name = "mydb-rds-proxy"
}
resource "aws_secretsmanager_secret_version" "mydb_rds_proxy" {
secret_id = aws_secretsmanager_secret.mydb_rds_proxy.id
secret_string = var.db_password
}
resource "aws_iam_role" "mydb_rds_proxy" {
name = "mydb-rds-proxy"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "rds.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_policy" "mydb_rds_proxy_policy" {
name = "mydb-rds-proxy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GetSecretValue",
"Action": [
"secretsmanager:GetSecretValue"
],
"Effect": "Allow",
"Resource": [
"${aws_secretsmanager_secret.mydb_rds_proxy.arn}"
]
},
{
"Sid": "DecryptSecretValue",
"Action": [
"kms:Decrypt"
],
"Effect": "Allow",
"Resource": [
"${aws_secretsmanager_secret.mydb_rds_proxy.arn}"
]
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "mydb_rds_proxy_policy_attachment" {
role = aws_iam_role.mydb_rds_proxy.name
policy_arn = aws_iam_policy.mydb_rds_proxy_policy.arn
}
resource "aws_db_proxy" "mydb" {
name = "mydb-rds-proxy"
debug_logging = false
engine_family = "POSTGRESQL"
idle_client_timeout = 1800
require_tls = true
role_arn = aws_iam_role.mydb_rds_proxy.arn
vpc_security_group_ids = [ aws_security_group.mydb_rds.id ]
vpc_subnet_ids = [
data.aws_subnet.mydb_rds.id,
aws_default_subnet.subnet_a.id,
aws_default_subnet.subnet_b.id
]
auth {
auth_scheme = "SECRETS"
iam_auth = "DISABLED"
secret_arn = aws_secretsmanager_secret.mydb_rds_proxy.arn
}
}
resource "aws_db_proxy_default_target_group" "mydb" {
db_proxy_name = aws_db_proxy.mydb.name
connection_pool_config {
connection_borrow_timeout = 120
max_connections_percent = 100
max_idle_connections_percent = 50
}
}
resource "aws_db_proxy_target" "mydb" {
db_instance_identifier = aws_db_instance.mydb.id
db_proxy_name = aws_db_proxy.mydb.name
target_group_name = aws_db_proxy_default_target_group.mydb.name
}
locals {
proxied_pg_connection_string = "postgres://${aws_db_instance.mydb.username}:${var.db_password}@${aws_db_proxy.mydb.endpoint}:5432/postgres?client_encoding=UTF8"
}
There are several things you need to get right for this to work:
Username / password stored in the secret
Security group rules from Lambda -> RDS Proxy
Security group rules from RDS Proxy -> RDS
RDS Proxy, Lambda and RDS in the same VPC
RDS Proxy role can access the secret
A useful debugging query is:
aws rds describe-db-proxy-targets --db-proxy-name <proxy-name>
To understand the error message it gives back, see the RDS Proxy troubleshooting page in the AWS documentation.
The username / password secret is the hardest thing to get right, since Terraform does not generate it for you. What you need to do is construct a JSON string in Terraform that matches the format RDS Proxy expects:
resource "aws_secretsmanager_secret_version" "my_db_proxy" {
secret_id = aws_secretsmanager_secret.my_db_proxy.id
secret_string = jsonencode({
"username" = aws_db_instance.my_db.username
"password" = var.db_password
"engine" = "postgres"
"host" = aws_db_instance.my_db.address
"port" = 5432
"dbInstanceIdentifier" = aws_db_instance.my_db.id
})
}
You then need to ensure these security group rules allowing TCP traffic on port 5432 (for Postgres) exist:
ingress Lambda to RDS Proxy
ingress RDS Proxy to RDS
egress RDS Proxy to "0.0.0.0/0"
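As a sketch in Terraform (the Lambda and database security group names here are hypothetical; the question uses a single aws_security_group.mydb_rds, so adapt the references to your setup):

```hcl
# Lambda -> RDS Proxy: ingress on the proxy's security group
resource "aws_security_group_rule" "lambda_to_proxy" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.mydb_rds.id # proxy SG
  source_security_group_id = aws_security_group.lambda.id   # hypothetical Lambda SG
}

# RDS Proxy -> RDS: ingress on the database's security group
resource "aws_security_group_rule" "proxy_to_rds" {
  type                     = "ingress"
  from_port                = 5432
  to_port                  = 5432
  protocol                 = "tcp"
  security_group_id        = aws_security_group.db.id       # hypothetical DB SG
  source_security_group_id = aws_security_group.mydb_rds.id
}

# Egress from the proxy's security group
resource "aws_security_group_rule" "proxy_egress" {
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  security_group_id = aws_security_group.mydb_rds.id
  cidr_blocks       = ["0.0.0.0/0"]
}
```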
The RDS Proxy role should have a policy like this:
resource "aws_iam_policy" "my_rds_proxy_policy" {
name = "my-rds-proxy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": [
"rds:*"
],
"Effect": "Allow",
"Resource": [
"${aws_db_instance.my_db.arn}"
]
},
{
"Sid": "GetSecretValue",
"Action": [
"secretsmanager:GetSecretValue"
],
"Effect": "Allow",
"Resource": [
"${aws_secretsmanager_secret.my_rds_proxy.arn}"
]
},
{
"Sid": "DecryptSecretValue",
"Action": [
"kms:Decrypt"
],
"Effect": "Allow",
"Resource": [
"*"
]
},
{
"Sid": "DecryptKms",
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "*",
"Condition": {
"StringEquals": {
"kms:ViaService": "secretsmanager.${var.aws_region}.amazonaws.com"
}
}
}
]
}
EOF
}
Good luck!
I referenced the code at https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_instance_profile and created an iam.tf file. I tried to attach the policy to an EC2 instance, but I got an error:
aws_iam_role.role: Creating...
Error: failed creating IAM Role (jenkins_server_role): MalformedPolicyDocument: Has prohibited field Resource
status code: 400, request id: c2b8db57-357f-4657-a692-a3e6026a6b7b
with aws_iam_role.role,
on iam.tf line 6, in resource "aws_iam_role" "role":
6: resource "aws_iam_role" "role" {
Releasing state lock. This may take a few moments...
ERRO[0011] Terraform invocation failed in /home/pluo/works/infra/jenkins
ERRO[0011] 1 error occurred:
* exit status 1
Here is the iam.tf:
resource "aws_iam_instance_profile" "jenkins_server" {
name = "jenkins_server"
role = aws_iam_role.role.name
}
resource "aws_iam_role" "role" {
name = "jenkins_server_role"
path = "/"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "ec2:*",
"Effect": "Allow",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "elasticloadbalancing:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "cloudwatch:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "autoscaling:*",
"Resource": "*"
},
{
"Effect": "Allow",
"Action": "iam:CreateServiceLinkedRole",
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:AWSServiceName": [
"autoscaling.amazonaws.com",
"ec2scheduled.amazonaws.com",
"elasticloadbalancing.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"transitgateway.amazonaws.com"
]
}
}
}
]
}
EOF
}
Here is the module that creates the EC2 instance.
module "ec2" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "4.1.4"
name = var.ec2_name
ami = var.ami
instance_type = var.instance_type
availability_zone = var.availability_zone
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnets[0]
vpc_security_group_ids = [aws_security_group.WebServerSG.id]
associate_public_ip_address = true
key_name = var.key_name
monitoring = true
iam_instance_profile = aws_iam_instance_profile.jenkins_server.name
enable_volume_tags = false
root_block_device = [
{
encrypted = true
volume_type = "gp3"
throughput = 200
volume_size = 100
tags = {
Name = "jenkins_server"
}
},
]
tags = {
Name = "WebServerSG"
}
}
Your assume_role_policy is incorrect. For EC2 instances it should be:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "ec2.amazonaws.com"},
"Action": "sts:AssumeRole"
}
]
}
Then your current assume_role_policy content should be moved into an aws_iam_role_policy resource instead.
Just to provide more information on top of the good answer from Marcin. In AWS IAM roles there are two kinds of policies you might specify (which at first might be confusing):
assume role policy - i.e. "who"/what can assume this role; as Marcin mentions, here you want to specify that EC2 instances can assume this role - i.e. act in this role
role policy - i.e. what this role can do; in your case all these elasticloadbalancing, cloudwatch, etc. permissions
So, putting that into Terraform perspective:
the former should go into the assume_role_policy of an aws_iam_role
the latter should go into a separate aws_iam_role_policy resource
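A minimal sketch of that split, reusing the role from the question (the role policy body is abbreviated here):

```hcl
resource "aws_iam_role" "role" {
  name = "jenkins_server_role"
  path = "/"
  # Who may assume the role: EC2 instances.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# What the role may do: the permissions from the original policy (abbreviated).
resource "aws_iam_role_policy" "role_policy" {
  name = "jenkins_server_role_policy"
  role = aws_iam_role.role.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["ec2:*", "elasticloadbalancing:*", "cloudwatch:*", "autoscaling:*"]
      Resource = "*"
    }]
  })
}
```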
I created an AWS step function using Terraform. For now, the step function invokes only one Lambda function:
resource "aws_iam_role_policy" "sfn_policy" {
policy = jsonencode(
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
},
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:InvokeAsync"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [ "states:StartExecution" ],
"Resource": "*"
}
]
}
)
role = aws_iam_role.processing_lambda_role.id
}
resource "aws_sfn_state_machine" "sfn_state_machine_zip_files" {
name = local.zip_files_step_function_name
role_arn = aws_iam_role.processing_lambda_role.arn
definition = <<EOF
{
"Comment": "Process Incoming Zip Files",
"StartAt": "ProcessIncomingZipFiles",
"States": {
"ProcessIncomingZipFiles": {
"Type": "Task",
"Resource": "${aws_lambda_function.process_zip_files_lambda.arn}",
"ResultPath": "$.Output",
"End": true
}
}
}
EOF
}
This is how the role is initially defined:
resource "aws_iam_role" "processing_lambda_role" {
name = local.name
path = "/service-role/"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = { Service = "lambda.amazonaws.com" }
Action = "sts:AssumeRole"
}
]
})
}
Why do I get the following error message even though the policies already include sts:AssumeRole? I also tried removing one of the sts:AssumeRole statements, but the error was still there.
"Neither the global service principal states.amazonaws.com, nor the regional one is authorized to assume the provided role."
AWS docs Reference: https://aws.amazon.com/premiumsupport/knowledge-center/step-functions-iam-role-troubleshooting/
The role aws_iam_role.processing_lambda_role can only be assumed by a Lambda function, so your aws_sfn_state_machine.sfn_state_machine_zip_files can't assume it. You have to change the Principal in the role from:
Principal = { Service = "lambda.amazonaws.com" }
into
Principal = { Service = "states.amazonaws.com" }
You may still have other issues, depending on what exactly you want to do, but the reported error is due to what I mentioned.
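Concretely, applied to the role from the question, only the principal changes (note that in practice the Lambda function usually keeps its own separate role with lambda.amazonaws.com as the principal):

```hcl
resource "aws_iam_role" "processing_lambda_role" {
  name = local.name
  path = "/service-role/"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Principal = { Service = "states.amazonaws.com" } # was lambda.amazonaws.com
        Action    = "sts:AssumeRole"
      }
    ]
  })
}
```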
I'm trying to figure out a "one command" approach to spin up spot instances with terraform.
I've managed to get this far:
resource "aws_iam_role" "spot_role" {
name_prefix = "spot_role_"
assume_role_policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "spotfleet.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy" "spot_policy" {
name_prefix = "spot_policy_"
description = "EC2 Spot Fleet Policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeImages",
"ec2:DescribeSubnets",
"ec2:RequestSpotInstances",
"ec2:TerminateInstances",
"ec2:DescribeInstanceStatus",
"ec2:CreateTags",
"ec2:RunInstances"
],
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": "iam:PassRole",
"Condition": {
"StringEquals": {
"iam:PassedToService": [
"ec2.amazonaws.com",
"ec2.amazonaws.com.cn"
]
}
},
"Resource": [
"*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:RegisterInstancesWithLoadBalancer"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:loadbalancer/*"
]
},
{
"Effect": "Allow",
"Action": [
"elasticloadbalancing:RegisterTargets"
],
"Resource": [
"arn:aws:elasticloadbalancing:*:*:*/*"
]
}
]
}
EOF
}
locals {
role = aws_iam_role.spot_role
zone = join("", [var.region, "a"])
}
// -----------------------------------------------------------------------------
resource "aws_iam_role_policy_attachment" "spot_role_policy_attachment" {
role = local.role.name
policy_arn = aws_iam_policy.spot_policy.arn
}
// -----------------------------------------------------------------------------
resource "aws_security_group" "spot_security_group" {
name = "allow_ssh"
description = "Allow SSH inbound traffic"
vpc_id = aws_vpc.spot_vpc.id
ingress {
description = "SSH from anywhere"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_ssh"
}
}
resource "aws_vpc" "spot_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "spot_vpc"
}
}
resource "aws_subnet" "spot_subnet" {
vpc_id = aws_vpc.spot_vpc.id
cidr_block = "10.0.0.0/24"
map_public_ip_on_launch = true
availability_zone = local.zone
tags = {
Name = "spot_subnet"
}
}
resource "aws_spot_fleet_request" "spot_fleet_request" {
iam_fleet_role = local.role.arn
spot_price = "0.022"
target_capacity = 1
launch_specification {
instance_type = "r6g.medium"
ami = var.ami
key_name = var.key_name
subnet_id = aws_subnet.spot_subnet.id
availability_zone = local.zone
}
launch_specification {
instance_type = "a1.large"
ami = var.ami
key_name = var.key_name
subnet_id = aws_subnet.spot_subnet.id
availability_zone = local.zone
}
}
// -----------------------------------------------------------------------------
output "spot_fleet_request_id" {
value = aws_spot_fleet_request.spot_fleet_request.id
}
But I get the following error in the Spot Requests -> Details view:
spotFleetRequestConfigurationInvalid
r6g.medium, ami-0c582118883b46f4f, Linux/UNIX (Amazon VPC): The provided credentials do not have permission to create the service-linked role for EC2 Spot Instances.
EDIT
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com
An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation: Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix.
arn:aws:iam::20XXXXXXX22:role/aws-service-role/spot.amazonaws.com/AWSServiceRoleForEC2Spot has the following AWSEC2SpotServiceRolePolicy attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:StartInstances",
"ec2:StopInstances",
"ec2:RunInstances"
],
"Resource": [
"*"
]
},
{
"Effect": "Deny",
"Action": [
"ec2:RunInstances"
],
"Resource": [
"arn:aws:ec2:*:*:instance/*"
],
"Condition": {
"StringNotEquals": {
"ec2:InstanceMarketType": "spot"
}
}
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"iam:PassedToService": [
"ec2.amazonaws.com",
"ec2.amazonaws.com.cn"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateTags"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"ec2:CreateAction": "RunInstances"
}
}
}
]
}
I'm trying to create a pipeline to deploy on Beanstalk, but I constantly get an error in the deploy stage of the pipeline:
Insufficient permissions
The provided role does not have sufficient permissions to access
Elastic Beanstalk: Access Denied
What am I missing?
/************************************************
* Code Build
***********************************************/
resource "aws_codebuild_project" "project-name-codebuild" {
name = "${var.project}-codebuild"
build_timeout = "15"
service_role = "${aws_iam_role.project-name-codebuild-role.arn}"
artifacts {
type = "CODEPIPELINE"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
type = "LINUX_CONTAINER"
image = "aws/codebuild/java:openjdk-8"
}
source {
type = "CODEPIPELINE"
}
tags {
Name = "${var.project}"
Environment = "${var.environment}"
}
}
resource "aws_ecr_repository" "project-name-ecr-repository" {
name = "${var.project}-ecr-repository"
}
resource "aws_iam_role" "project-name-codebuild-role" {
name = "${var.project}-codebuild-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codebuild.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "project-name-codebuild-role-policy" {
role = "${aws_iam_role.project-name-codebuild-role.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": [
"*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "project-name-codebuild-role-policy-bucket" {
policy_arn = "${aws_iam_policy.project-name-code-pipeline-bucket-access.arn}"
role = "${aws_iam_role.project-name-codebuild-role.name}"
}
/************************************************
* Code Pipeline
***********************************************/
resource "aws_codepipeline" "project-name-code-pipeline" {
name = "${var.project}-code-pipeline"
role_arn = "${aws_iam_role.project-name-code-pipeline-role.arn}"
artifact_store {
location = "${aws_s3_bucket.project-name-code-pipeline-bucket.bucket}"
type = "S3"
}
stage {
name = "Source"
action {
name = "Source"
category = "Source"
owner = "ThirdParty"
provider = "GitHub"
version = "1"
output_artifacts = [
"source"]
configuration {
Owner = "Owner"
Repo = "project-name"
Branch = "master"
OAuthToken = "${var.github-token}"
}
}
}
stage {
name = "Build-Everything"
action {
name = "Build"
category = "Build"
owner = "AWS"
provider = "CodeBuild"
input_artifacts = [
"source"]
output_artifacts = [
"build"]
version = "1"
configuration {
ProjectName = "${aws_codebuild_project.project-name-codebuild.name}"
}
}
}
stage {
name = "Deploy"
action {
name = "Deploy"
category = "Deploy"
owner = "AWS"
provider = "ElasticBeanstalk"
input_artifacts = [
"build"]
version = "1"
configuration {
ApplicationName = "${aws_elastic_beanstalk_application.project-name.name}"
EnvironmentName = "${aws_elastic_beanstalk_environment.project-name-environment.name}"
}
}
}
}
resource "aws_s3_bucket" "project-name-code-pipeline-bucket" {
bucket = "${var.project}-code-pipeline-bucket"
acl = "private"
}
resource "aws_iam_policy" "project-name-code-pipeline-bucket-access" {
name = "${var.project}-code-pipeline-bucket-access"
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect":"Allow",
"Resource": [
"${aws_s3_bucket.project-name-code-pipeline-bucket.arn}",
"${aws_s3_bucket.project-name-code-pipeline-bucket.arn}/*"
],
"Action": [
"s3:CreateBucket",
"s3:GetAccelerateConfiguration",
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketLocation",
"s3:GetBucketLogging",
"s3:GetBucketNotification",
"s3:GetBucketPolicy",
"s3:GetBucketRequestPayment",
"s3:GetBucketTagging",
"s3:GetBucketVersioning",
"s3:GetBucketWebsite",
"s3:GetLifecycleConfiguration",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTagging",
"s3:GetObjectTorrent",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTagging",
"s3:GetObjectVersionTorrent",
"s3:GetReplicationConfiguration",
"s3:ListAllMyBuckets",
"s3:ListBucket",
"s3:ListBucketMultipartUploads",
"s3:ListBucketVersions",
"s3:ListMultipartUploadParts",
"s3:PutObject"
]
}
]
}
POLICY
}
resource "aws_iam_role" "project-name-code-pipeline-role" {
name = "${var.project}-code-pipeline-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "codepipeline.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "project-name-code-pipeline-role-policy" {
name = "${var.project}-code-pipeline-role-policy"
role = "${aws_iam_role.project-name-code-pipeline-role.id}"
policy = <<EOF
{
"Statement": [
{
"Action": [
"s3:GetObject",
"s3:GetObjectVersion",
"s3:GetBucketVersioning"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::codepipeline*",
"arn:aws:s3:::elasticbeanstalk*"
],
"Effect": "Allow"
},
{
"Action": [
"codedeploy:CreateDeployment",
"codedeploy:GetApplicationRevision",
"codedeploy:GetDeployment",
"codedeploy:GetDeploymentConfig",
"codedeploy:RegisterApplicationRevision"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"elasticbeanstalk:CreateApplicationVersion",
"elasticbeanstalk:DescribeApplicationVersions",
"elasticbeanstalk:DescribeEnvironments",
"elasticbeanstalk:DescribeEvents",
"elasticbeanstalk:UpdateEnvironment",
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeScalingActivities",
"autoscaling:ResumeProcesses",
"autoscaling:SuspendProcesses",
"cloudformation:GetTemplate",
"cloudformation:DescribeStackResource",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeStacks",
"cloudformation:UpdateStack",
"ec2:DescribeInstances",
"ec2:DescribeImages",
"ec2:DescribeAddresses",
"ec2:DescribeSubnets",
"ec2:DescribeVpcs",
"ec2:DescribeSecurityGroups",
"ec2:DescribeKeyPairs",
"elasticloadbalancing:DescribeLoadBalancers",
"rds:DescribeDBInstances",
"rds:DescribeOrderableDBInstanceOptions",
"sns:ListSubscriptionsByTopic"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"lambda:invokefunction",
"lambda:listfunctions"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"s3:ListBucket",
"s3:GetBucketPolicy",
"s3:GetObjectAcl",
"s3:PutObjectAcl",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::elasticbeanstalk*",
"Effect": "Allow"
}
],
"Version": "2012-10-17"
}
EOF
}
resource "aws_iam_role_policy_attachment" "project-name-code-pipeline-role-policy-attachment" {
policy_arn = "${aws_iam_policy.project-name-code-pipeline-bucket-access.arn}"
role = "${aws_iam_role.project-name-code-pipeline-role.name}"
}
I came across the same problem. The issue is that you need to grant S3 access to "arn:aws:s3:::elasticbeanstalk*".
I agree that the error message is rather obscure.
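For example, the pipeline role's policy could include a statement along these lines (a sketch; tighten or widen the actions as needed):

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:GetObject",
    "s3:GetObjectAcl",
    "s3:PutObject",
    "s3:ListBucket"
  ],
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk*",
    "arn:aws:s3:::elasticbeanstalk*/*"
  ]
}
```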
I would suggest checking these things to debug:
Did the template create the resources you were expecting?
Does the pipeline role have the right policy attached to it? You can run aws codepipeline get-pipeline to find the pipeline's role ARN, and use the IAM console to check that the policy is what you expect.
Are you missing some Elastic Beanstalk permissions in the policy? I'm not sure you are, but try changing the policy to "elasticbeanstalk:*".
Try assuming the pipeline's role in the console and deploying to the Elastic Beanstalk environment manually; see if the Elastic Beanstalk console gives you more detailed information.
I am trying to provision an ECS cluster with Terraform. Everything seems to work well up until I create the ECS service:
resource "aws_ecs_service" "ecs-service" {
name = "ecs-service"
iam_role = "${aws_iam_role.ecs-service-role.name}"
cluster = "${aws_ecs_cluster.ecs-cluster.id}"
task_definition = "${aws_ecs_task_definition.my_cluster.family}"
desired_count = 1
load_balancer {
target_group_arn = "${aws_alb_target_group.ecs-target-group.arn}"
container_port = 80
container_name = "my_cluster"
}
}
and the IAM role is:
resource "aws_iam_role" "ecs-service-role" {
name = "ecs-service-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs-service-role-attachment" {
role = "${aws_iam_role.ecs-service-role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
I am getting the following error message:
aws_ecs_service.ecs-service: 1 error(s) occurred:
aws_ecs_service.ecs-service: InvalidParameterException: Unable to assume role and validate the specified targetGroupArn. Please verify
that the ECS service role being passed has the proper permissions.
In assume_role_policy, change the "Principal" service as shown below: you currently have ec2.amazonaws.com, but it should be ecs.amazonaws.com.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ecs.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
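In Terraform, that corresponds to (a sketch reusing the resource name from the question):

```hcl
resource "aws_iam_role" "ecs-service-role" {
  name = "ecs-service-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
```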