I'm struggling to get an AWS Lambda function to connect to an AWS Elasticsearch cluster.
I have an AWS Lambda function defined as follows:
resource "aws_lambda_function" "fun1" {
function_name = "fun1"
role = aws_iam_role.ia0.arn
vpc_config {
security_group_ids = local.security_group_ids
subnet_ids = local.subnet_ids
}
environment {
variables = {
ELASTICSEARCH_ENDPOINT = "https://${aws_elasticsearch_domain.es.endpoint}"
}
}
}
resource "aws_iam_role" "ia0" {
name = "lambda-exec-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.ia0.id
policy_arn = aws_iam_policy.lambda_logging.arn
}
data "aws_iam_policy" "AWSLambdaBasicExecutionRole" {
arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "AWSLambdaBasicExecutionRole" {
role = aws_iam_role.ia0.id
policy_arn = data.aws_iam_policy.AWSLambdaBasicExecutionRole.arn
}
data "aws_iam_policy" "AWSLambdaVPCAccessExecutionRole" {
arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
resource "aws_iam_role_policy_attachment" "AWSLambdaVPCAccessExecutionRole" {
role = aws_iam_role.ia0.id
policy_arn = data.aws_iam_policy.AWSLambdaVPCAccessExecutionRole.arn
}
My VPC is defined like this:
locals {
security_group_ids = [aws_security_group.sg0.id]
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}
resource "aws_vpc" "vpc0" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_subnet" "private_a" {
vpc_id = aws_vpc.vpc0.id
cidr_block = cidrsubnet(aws_vpc.vpc0.cidr_block, 2, 1)
availability_zone = "eu-west-3a"
}
resource "aws_subnet" "private_b" {
vpc_id = aws_vpc.vpc0.id
cidr_block = cidrsubnet(aws_vpc.vpc0.cidr_block, 2, 2)
availability_zone = "eu-west-3b"
}
resource "aws_security_group" "sg0" {
vpc_id = aws_vpc.vpc0.id
}
Finally, my cluster looks like this:
resource "aws_elasticsearch_domain" "es" {
domain_name = "es"
elasticsearch_version = "7.9"
cluster_config {
instance_count = 2
zone_awareness_enabled = true
instance_type = "t2.small.elasticsearch"
}
domain_endpoint_options {
enforce_https = true
tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}
ebs_options {
ebs_enabled = true
volume_size = 10
}
vpc_options {
security_group_ids = local.security_group_ids
subnet_ids = local.subnet_ids
}
}
resource "aws_iam_role_policy" "rp0" {
name = "rp0"
role = aws_iam_role.ia0.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"es:*"
],
"Resource": [
"${aws_elasticsearch_domain.es.arn}",
"${aws_elasticsearch_domain.es.arn}/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"${aws_subnet.private_a.cidr_block}",
"${aws_subnet.private_b.cidr_block}"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterface"
],
"Resource": [
"*"
]
}
]
}
EOF
}
Despite that, I still get this response:
Response
{ responseStatus = Status {statusCode = 403, statusMessage = "Forbidden"}
, responseVersion = HTTP/1.1
, responseHeaders =
[("Date","xxx")
,("Content-Type","application/json")
,("Content-Length","72")
,("Connection","keep-alive")
,("x-amzn-RequestId","xxx")
,("Access-Control-Allow-Origin","*")
]
, responseBody = "{\"Message\":\"User: anonymous is not authorized to perform: es:ESHttpPut\"}"
, responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose
}
According to the AWS documentation, a CIDR-based condition should be sufficient, but in practice something is missing.
Thanks in advance for your help.
You need to sign the request before making the HTTP call, so that Elasticsearch can tell who is initiating the request. I don't know which programming language you are using; here is what we can do in Node.js.
For a simple HTTP call (the endpoint below comes from the ELASTICSEARCH_ENDPOINT environment variable defined in your Terraform):
let endpoint = new AWS.Endpoint(process.env.ELASTICSEARCH_ENDPOINT);
let request = new (AWS as any).HttpRequest(endpoint, 'us-east-1');
request.method = 'PUT'; // set path/body/headers for your particular call
request.headers['Host'] = endpoint.host;
let credentials = new AWS.EnvironmentCredentials('AWS');
let signers = new (AWS as any).Signers.V4(request, 'es');
signers.addAuthorization(credentials, new Date());
If you are using a package like @elastic/elasticsearch, you can combine it with http-aws-es to create a client that signs requests. It might look something like:
let options = {
hosts: [ yourHost ],
connectionClass: require('http-aws-es'),
awsConfig: new AWS.Config({ region: 'us-east-1', credentials: new AWS.EnvironmentCredentials('AWS') })
};
let elasticsearch = require('elasticsearch');
let client = new elasticsearch.Client(options);
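Two things are worth adding here. First, the AWS documentation notes that IP-based access policies are not supported for domains inside a VPC (security groups handle network access there), which would explain why the CIDR condition in rp0 never takes effect. Second, if you still get a 403 after signing, you can grant the Lambda role explicitly on the domain side. A minimal sketch in Terraform, reusing the resource names from the question (one possible shape, not a verified drop-in fix):
resource "aws_elasticsearch_domain_policy" "es_policy" {
  domain_name = aws_elasticsearch_domain.es.domain_name
  access_policies = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "${aws_iam_role.ia0.arn}"
      },
      "Action": "es:*",
      "Resource": "${aws_elasticsearch_domain.es.arn}/*"
    }
  ]
}
POLICY
}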
Related
I am attempting to create a CI/CD pipeline using AWS CodePipeline to deploy Magento to an EC2 instance, with Terraform provisioning the infrastructure on AWS. After running the script, it fails at the DOWNLOAD_SOURCE phase of the build stage. After much enquiry I realised it has to do with the VPC settings. The AWS documentation says to use private subnet IDs and to present the subnet ARN rather than just the subnet ID. I have done all that, but it is still failing. What do I do?
Here is the CodeBuild code:
resource "aws_codebuild_project" "o4bproject_codebuild" {
name = "${local.name}-codebuild-project"
description = "${local.name}_codebuild_project"
build_timeout = 60
queued_timeout = 480
depends_on = [aws_iam_role.o4bproject_codebuild]
service_role = aws_iam_role.o4bproject_codebuild.arn
artifacts {
type = "CODEPIPELINE"
encryption_disabled = false
name = "${local.name}-codebuild-project"
override_artifact_name = false
packaging = "NONE"
}
cache {
type = "S3"
location = aws_s3_bucket.o4bproject-codebuild.bucket
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/standard:5.0"
type = "LINUX_CONTAINER"
image_pull_credentials_type = "CODEBUILD"
environment_variable {
name = "PARAMETERSTORE"
value = aws_ssm_parameter.env.name
type = "PARAMETER_STORE"
}
}
logs_config {
cloudwatch_logs {
group_name = "log-group"
stream_name = "log-stream"
}
s3_logs {
status = "ENABLED"
location = "${aws_s3_bucket.o4bproject-codebuild.bucket}/build-log"
}
}
source {
type = "CODEPIPELINE"
buildspec = file("${abspath(path.root)}/buildspec.yml")
location = REDACTED
git_clone_depth = 1
git_submodules_config {
fetch_submodules = true
}
}
source_version = "master"
vpc_config {
vpc_id = aws_vpc.o4bproject.id
subnets = [
aws_subnet.o4bproject-private[0].id,
aws_subnet.o4bproject-private[1].id,
]
security_group_ids = [
aws_security_group.o4bproject_dev_ec2_private_sg.id,
]
}
tags = local.common_tags
}
Here are the CodeBuild role and policy:
resource "aws_iam_role" "o4bproject_codebuild" {
name = "${local.name}-codebuild-role"
description = "Allows CodeBuild to call AWS services on your behalf."
path = "/service-role/"
assume_role_policy = jsonencode(
{
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = "codebuild.amazonaws.com"
}
Sid = ""
},
]
Version = "2012-10-17"
}
)
tags = local.common_tags
}
resource "aws_iam_role_policy" "o4bproject_codebuild" {
role = aws_iam_role.o4bproject_codebuild.name
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCodebuildCreateLogs",
"Effect": "Allow",
"Resource": [
"${aws_cloudwatch_log_group.o4bproject-codebuild.name}:*"
],
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
]
},
{
"Sid": "AllowCodeBuildGitActions",
"Effect": "Allow",
"Action": [
"github:GitPull"
],
"Resource": "*"
},
{
"Sid": "AllowCodeBuildGitActions",
"Effect": "Allow",
"Action": [
"github:GitPush"
],
"Resource": "*",
"Condition": {
"StringEqualsIfExists": {
"codecommit:References": [
"refs/heads/build"
]
}
}
},
{
"Sid": "SNSTopicListAccess",
"Effect": "Allow",
"Action": [
"sns:ListTopics",
"sns:GetTopicAttributes"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterface",
"ec2:DescribeDhcpOptions",
"ec2:DescribeNetworkInterfaces",
"ec2:DeleteNetworkInterface",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeVpcs"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"ec2:CreateNetworkInterfacePermission"
],
"Resource": [
"arn:aws:ec2:${var.aws_region}:REDACTED:network-interface/*"
],
"Condition": {
"StringEquals": {
"ec2:Subnet": [
"${aws_subnet.o4bproject-private[0].arn}",
"${aws_subnet.o4bproject-private[1].arn}"
],
"ec2:AuthorizedService": "codebuild.amazonaws.com"
}
}
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
"${aws_s3_bucket.o4bproject-codebuild.arn}",
"${aws_s3_bucket.o4bproject-codebuild.arn}/*"
]
}
]
}
POLICY
}
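Two side notes on this policy, independent of the build failure. IAM has no github service, so the github:GitPull and github:GitPush actions (and the duplicated AllowCodeBuildGitActions Sid, which must be unique within a policy) look like they were meant to be codecommit actions. Also, the AllowCodebuildCreateLogs statement interpolates the log group name where IAM expects an ARN; a sketch of that statement's Resource, assuming your aws_cloudwatch_log_group resource and its usual arn attribute:
"Resource": [
  "${aws_cloudwatch_log_group.o4bproject-codebuild.arn}:*"
]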
Here is my code for the VPC network:
resource "aws_vpc" "o4bproject" {
cidr_block = var.vpc_cidr_block
instance_tenancy = "default"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "${local.name}-vpc"
Environment = "dev"
}
}
> resource "aws_subnet" "o4bproject-private" {
count = var.item_count
vpc_id = aws_vpc.o4bproject.id
availability_zone = var.private_subnet_availability_zone[count.index]
cidr_block = var.private_subnet_cidr_block[count.index]
tags = {
Name = "${local.name}-private-subnet-${count.index}"
Environment = "dev"
}
}
> resource "aws_subnet" "o4bproject-public" {
count = var.item_count
vpc_id = aws_vpc.o4bproject.id
availability_zone = var.public_subnet_availability_zone[count.index]
cidr_block = var.public_subnet_cidr_block[count.index]
tags = {
Name = "${local.name}-public-subnet-${count.index}"
Environment = "dev"
}
}
> resource "aws_route_table" "public-route-table" {
vpc_id = aws_vpc.o4bproject.id
tags = {
Name = "${local.name}-public-route-table"
Environment = "dev"
}
}
> resource "aws_route_table" "private-route-table" {
vpc_id = aws_vpc.o4bproject.id
tags = {
Name = "${local.name}-private-route-table"
Environment = "dev"
}
}
> resource "aws_route_table_association" "public-route-association" {
count = var.item_count
route_table_id = aws_route_table.public-route-table.id
subnet_id = aws_subnet.o4bproject-public[count.index].id
}
> resource "aws_route_table_association" "private-route-association" {
count = var.item_count
route_table_id = aws_route_table.private-route-table.id
subnet_id = aws_subnet.o4bproject-private[count.index].id
}
> resource "aws_internet_gateway" "o4bproject-igw" {
vpc_id = aws_vpc.o4bproject.id
tags = {
Name = "${local.name}-igw"
Environment = "dev"
}
}
> resource "aws_route" "public-internet-gateway-route" {
route_table_id = aws_route_table.public-route-table.id
gateway_id = aws_internet_gateway.o4bproject-igw.id
destination_cidr_block = "0.0.0.0/0"
}
> resource "aws_eip" "o4bproject-eip-nat-gateway" {
vpc = true
count = var.item_count
associate_with_private_ip = REDACTED
depends_on = [aws_internet_gateway.o4bproject-igw]
tags = {
Name = "${local.name}-eip-${count.index}"
Environment = "dev"
}
}
> resource "aws_nat_gateway" "o4bproject-nat-gateway" {
allocation_id = aws_eip.o4bproject-eip-nat-gateway[count.index].id
count = var.item_count
subnet_id = aws_subnet.o4bproject-public[count.index].id
depends_on = [aws_eip.o4bproject-eip-nat-gateway]
tags = {
Name = "${local.name}-nat-gw-${count.index}"
Environment = "dev"
}
}
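One gap stands out in this VPC code: the public route table gets a default route to the internet gateway, but the private route table never gets a route to the NAT gateways, so the private subnets that CodeBuild runs in have no path to the internet. That alone would make DOWNLOAD_SOURCE fail. A sketch of the missing route, assuming your single private route table (index 0 picks one NAT gateway purely for illustration):
resource "aws_route" "private-nat-gateway-route" {
  route_table_id         = aws_route_table.private-route-table.id
  # With a single private route table, all private subnets share one NAT path
  nat_gateway_id         = aws_nat_gateway.o4bproject-nat-gateway[0].id
  destination_cidr_block = "0.0.0.0/0"
}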
I checked all similar questions on Stack Overflow but I couldn't find any decent answer for this issue. The main problem appears when I apply my Terraform: the instances come up and run successfully, and I can see the node group under EKS, but I can't see any nodes in my EKS cluster. I followed this AWS article and applied the steps below, but it didn't work. The article also mentions aws-auth and user data. Should I add those things too, and how? (Do I need user data if I already added the optimized AMI?)
In summary, my main problems:
my instances are running with the same name
my instances do not join the EKS cluster
Steps applied from the AWS article:
I added the AWS optimized AMI, but it doesn't solve my problem:
/aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/image_id (new update: during installation, the node group fails, probably because this image is not suitable for t2.micro)
I set the parameters below for the VPC, as the article says:
enable_dns_support = true
enable_dns_hostnames = true
I set the tags for my worker nodes:
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
I specified the user data in the launch template. Below you can see my userdata.sh file, which is called from the launch template.
There are no nodes :(
node_grp.tf: my EKS worker node Terraform file, with policies.
resource "aws_iam_role" "eks_nodes" {
name = "eks-node-group"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy" "node_autoscaling" {
name = "${var.base_name}-node_autoscaling_policy"
role = aws_iam_role.eks_nodes.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:DescribeTags"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_nodes.name
}
resource "aws_eks_node_group" "node" {
cluster_name = var.cluster_name
node_group_name = "${var.base_name}-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = var.private_subnet_ids
scaling_config {
desired_size = var.desired_nodes
max_size = var.max_nodes
min_size = var.min_nodes
}
launch_template {
name = aws_launch_template.node_group_template.name
version = aws_launch_template.node_group_template.latest_version
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
}
resource "aws_launch_template" "node_group_template" {
name = "${var.cluster_name}_node_group"
instance_type = var.instance_type
user_data = base64encode(templatefile("${path.module}/userdata.sh", { API_SERVER_URL = var.cluster_endpoint, B64_CLUSTER_CA = var.ca_certificate, CLUSTER_NAME = var.cluster_name } ))
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = var.disk_size
}
}
tag_specifications {
resource_type = "instance"
tags = {
"Instance Name" = "${var.cluster_name}-node"
Name = "${var.cluster_name}-node"
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
}
}
}
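Two things are worth checking in this launch template, hedged since I can't see all of your variables. First, it supplies bootstrap user data but no image_id; for a managed node group, EKS only stops injecting its own bootstrap user data when the launch template pins a custom AMI, so calling bootstrap.sh yourself normally goes together with pinning the AMI (the SSM parameter path is the one from your question). Second, the tag_specifications block above creates literal tags named key and value; the cluster-ownership tag has to be the map key itself. A sketch:
data "aws_ssm_parameter" "eks_ami" {
  name = "/aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/image_id"
}

resource "aws_launch_template" "node_group_template" {
  # ... same arguments as above, plus:
  image_id = data.aws_ssm_parameter.eks_ami.value

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name                                        = "${var.cluster_name}-node"
      "kubernetes.io/cluster/${var.cluster_name}" = "owned"
    }
  }
}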
cluster.tf: my main EKS cluster file.
resource "aws_iam_role" "eks_cluster" {
name = var.cluster_name
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = aws_iam_role.eks_cluster.arn
enabled_cluster_log_types = ["api", "audit", "authenticator","controllerManager","scheduler"]
vpc_config {
security_group_ids = [var.security_group_id]
subnet_ids = flatten([ var.private_subnet_ids, var.public_subnet_ids ])
endpoint_private_access = false
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.AmazonEKSServicePolicy
]
}
resource "aws_iam_openid_connect_provider" "oidc_provider" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = var.trusted_ca_thumbprints
url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}
user-data.sh: my user data file, called from the launch template.
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}
--==MYBOUNDARY==--
I'm creating a test Elasticsearch domain on AWS using Terraform. I can't manage to give full access from all IP addresses, and how do I automatically add a username and password for logging in to Kibana? I read the manuals on GitHub but I didn't understand how to do it. Help me, please.
resource "aws_elasticsearch_domain" "es" {
domain_name = var.domain
elasticsearch_version = var.version_elasticsearch
cluster_config {
instance_type = var.instance_type
}
snapshot_options {
automated_snapshot_start_hour = var.automated_snapshot_start_hour
}
ebs_options {
ebs_enabled = var.ebs_volume_size > 0 ? true : false
volume_size = var.ebs_volume_size
volume_type = var.volume_type
}
tags = {
Domain = var.tag_domain
}
}
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
description = "Allows Amazon ES to manage AWS resources for a domain on your behalf."
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"*"
]
}
},
"Resource": "${aws_elasticsearch_domain.es.arn}/*""
}
]
}
POLICIES
}
Access control for AWS OpenSearch is documented at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html, and the kind of access you are looking for is called fine-grained access control, which is explained in detail at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html.
I know this Terraform resource is not documented well enough to explain these different access types, which is why I am sharing a modified version of your code, with the additional arguments you were missing, to get your task going.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
variable "master_user_password" {
type = string
}
# Elasticsearch domain
resource "aws_elasticsearch_domain" "es_example" {
domain_name = "example-domain"
elasticsearch_version = "OpenSearch_1.0"
cluster_config {
instance_type = "t3.small.elasticsearch"
}
ebs_options {
ebs_enabled = true
volume_size = 10
volume_type = "gp2"
}
encrypt_at_rest {
enabled = true
}
node_to_node_encryption {
enabled = true
}
# This is required for using advanced security options
domain_endpoint_options {
enforce_https = true
tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}
# Authentication
advanced_security_options {
enabled = true
internal_user_database_enabled = true
master_user_options {
master_user_name = "es-admin"
master_user_password = var.master_user_password
# You can also use IAM role/user ARN
# master_user_arn = var.es_master_user_arn
}
}
tags = {
Domain = "es_example"
}
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es_example.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "${aws_elasticsearch_domain.es_example.arn}/*"
}
]
}
POLICIES
}
This code works for me: I was able to access the OpenSearch Dashboard from my browser and log in using the credentials I specified in the Terraform code.
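One usage note on the master_user_password variable above: marking it as sensitive (supported since Terraform 0.14) keeps the password out of plan and apply output:
variable "master_user_password" {
  type      = string
  sensitive = true # hides the value in terraform plan/apply output
}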
I'm having trouble trying to set up this infrastructure: I need an Aurora Serverless cluster running PostgreSQL, and to access it using Secrets Manager. I also want to rotate the secret using a Lambda function every X days.
However, I can't get the Lambda function to connect to the RDS cluster even with the original credentials. What am I doing wrong? Is it not possible to do this?
This is my Terraform code:
# --- LOCALS
# ---
locals {
db_role_name = "MYAPP-app-backuprestore"
db_name = "MYAPP-rds-${var.region}"
option_group_name = "MYAPP-rds-optiongroup"
security_group_name = "MYAPP-vpc-scg"
db_subnet_group_name = "MYAPP-vpc-sng"
rotation_lambda_function_name = "MYAPP-secretsmanager-rotationlambda-${var.region}"
rotation_lambda_role_name = "MYAPP-app-rotationlambda"
dbi_credentials_secret_name = "MYAPP/rds/master-credentials"
dbi_name = "MYAPP-rds-${var.region}"
backup_bucket_name = var.backup_bucket_name != "" ? var.backup_bucket_name : "MYAPP-data-${var.region}-${var.target_account_id}"
backup_location = var.backup_object_prefix == "" ? local.backup_bucket_name : "${local.backup_bucket_name}/${var.backup_object_prefix}"
common_tags = {
"owner:technical" = var.technical_owner
"owner:business" = var.business_owner
migrated = "False"
environment = var.environment
}
db_tags = merge(
local.common_tags,
{
c7n_start = 1
confidentiality = "restricted"
Name = local.db_name
}
)
role_tags = merge(
local.common_tags,
{
Name = local.db_role_name
}
)
option_group_tags = merge(
local.common_tags,
{
Name = local.option_group_name
}
)
security_group_tags = merge(
local.common_tags,
{
Name = local.security_group_name
}
)
db_subnet_group_tags = merge(
local.common_tags,
{
Name = local.db_subnet_group_name
}
)
rotation_lambda_tags = merge(
local.common_tags,
{
Name = local.rotation_lambda_function_name
}
)
rotation_lambda_role_tags = merge(
local.common_tags,
{
Name = local.rotation_lambda_role_name
}
)
dbi_credentials_secret_tags = merge(
local.common_tags,
{
Name = local.dbi_credentials_secret_name
}
)
}
# --- OPTION GROUP
# ---
resource "aws_iam_role" "rds_restore_role" {
name = local.db_role_name
tags = local.role_tags
assume_role_policy = <<-POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "rds.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
POLICY
}
resource "aws_iam_role_policy" "rds_backup_policy" {
role = aws_iam_role.rds_restore_role.id
policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListContentInBackupBucket",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::${local.backup_bucket_name}",
"Condition": {
"StringLike": {
"s3:prefix": [
"${var.backup_object_prefix}",
"${var.backup_object_prefix}/*"
]
}
}
},
{
"Sid": "GetBucketLocation",
"Effect": "Allow",
"Action": "s3:GetBucketLocation",
"Resource": "arn:aws:s3:::${local.backup_bucket_name}"
},
{
"Sid": "ReadWriteObjects",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": "arn:aws:s3:::${local.backup_location}/*"
},
{
"Sid": "CheckAccessToBucketAndObjects",
"Effect": "Allow",
"Action": "s3:HeadBucket",
"Resource": "*"
}
]
}
EOF
}
# --- SECURITY GROUP
# ---
data "aws_vpcs" "vpc_ids" {}
resource "aws_security_group" "vpc_security_group" {
name = local.security_group_name
description = ""
tags = local.security_group_tags
vpc_id = tolist(data.aws_vpcs.vpc_ids.ids)[0]
ingress {
description = "Allow incoming connections from network"
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = [var.dbi_secgroup]
self = true
}
# Allows rotation Lambda to reach Secrets Manager API
egress {
description = "Allow outgoing connections"
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
# --- SUBNET
# ---
data "aws_subnet_ids" "private_subnets" {
vpc_id = tolist(data.aws_vpcs.vpc_ids.ids)[0]
filter {
name = "tag:aws:cloudformation:logical-id"
values = ["PrivateSubnet1", "PrivateSubnet2"]
}
}
resource "aws_db_subnet_group" "db_subnet_group" {
name = local.db_subnet_group_name
subnet_ids = data.aws_subnet_ids.private_subnets.ids
tags = local.db_subnet_group_tags
}
# --- AURORA SERVERLESS
resource "aws_rds_cluster" "default" {
cluster_identifier = local.db_name
vpc_security_group_ids = [ aws_security_group.vpc_security_group.id ]
db_subnet_group_name = aws_db_subnet_group.db_subnet_group.id
engine_mode = "serverless"
engine = "aurora-postgresql"
engine_version = "10.7"
master_username = var.dbi_user_name
master_password = var.dbi_password
backup_retention_period = 30
storage_encrypted = true
apply_immediately = true
database_name = "foobar"
scaling_configuration {
auto_pause = true
max_capacity = 2
min_capacity = 2
seconds_until_auto_pause = 500
}
skip_final_snapshot = true
lifecycle {
ignore_changes = [
"engine_version",
]
}
}
# --- SECRET MANAGER
resource "aws_secretsmanager_secret" "db_instance_credentials_secret" {
name = local.dbi_credentials_secret_name
description = ""
tags = local.dbi_credentials_secret_tags
}
resource "aws_secretsmanager_secret_version" "db_instance_credentials_secret_values" {
secret_id = aws_secretsmanager_secret.db_instance_credentials_secret.id
secret_string = jsonencode({
username: var.dbi_user_name,
password: var.dbi_password,
engine: "postgres",
host: aws_rds_cluster.default.endpoint,
port: 5432,
dbInstanceIdentifier: aws_rds_cluster.default.id
})
}
resource "aws_ssm_parameter" "db_instance_credentials_secret_name" {
name = "MYAPP/dbi_credentials_secret_arn"
type = "String"
value = aws_secretsmanager_secret.db_instance_credentials_secret.arn
}
# -- Rotation
resource "aws_secretsmanager_secret_rotation" "db_instance_credentials_rotation" {
secret_id = aws_secretsmanager_secret.db_instance_credentials_secret.id
rotation_lambda_arn = aws_lambda_function.secret_rotation_lambda.arn
rotation_rules {
automatically_after_days = var.lambda_rotation_days
}
}
# --- LAMBDA
# ---
resource "aws_lambda_function" "secret_rotation_lambda" {
filename = "lambda/${var.rotation_lambda_filename}.zip"
function_name = local.rotation_lambda_function_name
role = aws_iam_role.lambda_rotation_role.arn
handler = "lambda_function.lambda_handler"
source_code_hash = filebase64sha256("lambda/${var.rotation_lambda_filename}.zip")
runtime = "python3.7"
vpc_config {
subnet_ids = data.aws_subnet_ids.private_subnets.ids
security_group_ids = [aws_security_group.vpc_security_group.id]
}
timeout = 300
description = ""
environment {
variables = {
SECRETS_MANAGER_ENDPOINT = "https://secretsmanager.${var.region}.amazonaws.com"
}
}
tags = local.rotation_lambda_tags
}
resource "aws_iam_role" "lambda_rotation_role" {
name = local.rotation_lambda_role_name
tags = local.rotation_lambda_role_tags
assume_role_policy = <<-EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "policy_AWSLambdaBasicExecutionRole" {
role = aws_iam_role.lambda_rotation_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "policy_AWSLambdaVPCAccessExecutionRole" {
role = aws_iam_role.lambda_rotation_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
data "aws_iam_policy_document" "SecretsManagerRDSAuroraServerlessRotationSingleUserRolePolicy" {
statement {
actions = [
"ec2:CreateNetworkInterface",
"ec2:DeleteNetworkInterface",
"ec2:DescribeNetworkInterfaces",
"ec2:DetachNetworkInterface",
]
resources = ["*"]
}
statement {
actions = [
"secretsmanager:DescribeSecret",
"secretsmanager:GetSecretValue",
"secretsmanager:PutSecretValue",
"secretsmanager:UpdateSecretVersionStage",
]
resources = [
"arn:aws:secretsmanager:${var.region}:${var.target_account_id}:secret:*",
]
condition {
test = "StringEquals"
variable = "secretsmanager:resource/AllowRotationLambdaArn"
values = [aws_lambda_function.secret_rotation_lambda.arn]
}
}
statement {
actions = ["secretsmanager:GetRandomPassword"]
resources = ["*"]
}
}
resource "aws_iam_policy" "SecretsManagerRDSAuroraRotationSingleUserRolePolicy" {
path = "/"
policy = data.aws_iam_policy_document.SecretsManagerRDSAuroraRotationSingleUserRolePolicy.json
}
resource "aws_iam_role_policy_attachment" "SecretsManagerRDSAuroraRotationSingleUserRolePolicy" {
role = aws_iam_role.lambda_rotation_role.name
policy_arn = aws_iam_policy.SecretsManagerRDSAuroraRotationSingleUserRolePolicy.arn
}
resource "aws_lambda_permission" "allow_secret_manager_call_roation_lambda" {
function_name = aws_lambda_function.secret_rotation_lambda.function_name
statement_id = "AllowExecutionSecretManager"
action = "lambda:InvokeFunction"
principal = "secretsmanager.amazonaws.com"
}
The lambda/ folder has the code I downloaded from a Lambda function I set up manually to do the rotation, which I later deleted. The lambda_function.py code fails at this point:
def set_secret(service_client, arn, token):
    # First try to login with the pending secret, if it succeeds, return
    pending_dict = get_secret_dict(service_client, arn, "AWSPENDING", token)
    conn = get_connection(pending_dict)
    if conn:
        conn.close()
        logger.info("setSecret: AWSPENDING secret is already set as password in PostgreSQL DB for secret arn %s." % arn)
        return
    logger.info("setSecret: unable to log with AWSPENDING credentials")
    curr_dict = get_secret_dict(service_client, arn, "AWSCURRENT")
    # Now try the current password
    conn = get_connection(curr_dict)
    if not conn:
        # If both current and pending do not work, try previous
        logger.info("setSecret: unable to log with AWSCURRENT credentials")
        try:
            conn = get_connection(get_secret_dict(service_client, arn, "AWSPREVIOUS"))
        except service_client.exceptions.ResourceNotFoundException:
            logger.info("setSecret: Unable to log with AWSPREVIOUS credentials")
            conn = None
It can't connect to the RDS cluster with any of the secrets, even though I can connect from the console using those credentials (username and password).
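One detail worth double-checking before digging into the rotation code: the security group only allows ingress on port 3306, the MySQL port, while the cluster runs aurora-postgresql, which listens on 5432 (the port the secret itself advertises). Since the rotation Lambda uses the same security group, a sketch of the ingress rule it would need instead, assuming dbi_secgroup holds a CIDR block as in your code:
ingress {
  description = "Allow incoming PostgreSQL connections from network"
  from_port   = 5432
  to_port     = 5432
  protocol    = "tcp"
  cidr_blocks = [var.dbi_secgroup]
  self        = true # also allow members of this same security group
}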
Roles:
resource "aws_iam_role" "ecs-ec2-role" {
name = "${var.app_name}-ecs-ec2-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_instance_profile" "ecs-ec2-role" {
name = "${var.app_name}-ecs-ec2-role"
role = "${aws_iam_role.ecs-ec2-role.name}"
}
resource "aws_iam_role_policy" "ecs-ec2-role-policy" {
name = "${var.app_name}-ecs-ec2-role-policy"
role = "${aws_iam_role.ecs-ec2-role.id}"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:CreateCluster",
"ecs:DeregisterContainerInstance",
"ecs:DiscoverPollEndpoint",
"ecs:Poll",
"ecs:RegisterContainerInstance",
"ecs:StartTelemetrySession",
"ecs:Submit*",
"ecs:StartTask",
"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
},
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents",
"logs:DescribeLogStreams"
],
"Resource": [
"arn:aws:logs:*:*:*"
]
}
]
}
EOF
}
# ecs service role
resource "aws_iam_role" "ecs-service-role" {
name = "${var.app_name}-ecs-service-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": [
"ecs.amazonaws.com"
]
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs-service-attach" {
role = "${aws_iam_role.ecs-service-role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole"
}
data "aws_iam_policy_document" "aws_secrets_policy" {
version = "2012-10-17"
statement {
sid = ""
effect = "Allow"
actions = ["secretsmanager:GetSecretValue"]
resources = [
var.aws_secrets
]
}
}
resource "aws_iam_policy" "aws_secrets_policy" {
name = "aws_secrets_policy"
policy = "${data.aws_iam_policy_document.aws_secrets_policy.json}"
}
resource "aws_iam_role_policy_attachment" "aws_secrets_policy" {
role = aws_iam_role.ecs-ec2-role.name
policy_arn = aws_iam_policy.aws_secrets_policy.arn
}
ECS:
resource "aws_ecs_cluster" "main" {
name = "${var.app_name}-cluster"
}
data "template_file" "app" {
template = file("./templates/ecs/app.json.tpl")
vars = {
app_name = var.app_name
app_image = var.app_image
app_host = var.app_host
endpoint_protocol = var.endpoint_protocol
app_port = var.app_port
container_cpu = var.container_cpu
container_memory = var.container_memory
aws_region = var.aws_region
aws_secrets = var.aws_secrets
}
}
resource "aws_ecs_task_definition" "app" {
family = "${var.app_name}-task"
execution_role_arn = aws_iam_role.ecs-ec2-role.arn
cpu = var.container_cpu
memory = var.container_memory
container_definitions = data.template_file.app.rendered
}
resource "aws_ecs_service" "main" {
name = "${var.app_name}-service"
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.app.arn
desired_count = var.app_count
iam_role = aws_iam_role.ecs-service-role.arn
depends_on = [aws_iam_role_policy_attachment.ecs-service-attach]
load_balancer {
target_group_arn = aws_lb_target_group.app.id
container_name = var.app_name
container_port = var.app_port
}
}
Autoscaling:
data "aws_ami" "latest_ecs" {
most_recent = true
filter {
name = "name"
values = ["*amazon-ecs-optimized"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["591542846629"] # AWS
}
resource "aws_launch_configuration" "ecs-launch-configuration" {
// name = "${var.app_name}-launch-configuration"
image_id = data.aws_ami.latest_ecs.id
instance_type = var.instance_type
iam_instance_profile = aws_iam_instance_profile.ecs-ec2-role.id
security_groups = [aws_security_group.ecs_tasks.id]
root_block_device {
volume_type = "standard"
volume_size = 100
delete_on_termination = true
}
lifecycle {
create_before_destroy = true
}
associate_public_ip_address = "false"
key_name = "backend-dev"
#
# register the cluster name with ecs-agent which will in turn coord
# with the AWS api about the cluster
#
user_data = data.template_file.autoscaling_user_data.rendered
}
data "template_file" "autoscaling_user_data" {
template = file("./templates/ecs/autoscaling_user_data.tpl")
vars = {
ecs_cluster = aws_ecs_cluster.main.name
}
}
#
# need an ASG so we can easily add more ecs host nodes as necessary
#
resource "aws_autoscaling_group" "ecs-autoscaling-group" {
name = "${var.app_name}-autoscaling-group"
max_size = "4"
min_size = "2"
health_check_grace_period = 300
desired_capacity = "2"
vpc_zone_identifier = [aws_subnet.private[0].id, aws_subnet.private[1].id]
launch_configuration = aws_launch_configuration.ecs-launch-configuration.name
health_check_type = "ELB"
tag {
key = "Name"
value = var.app_name
propagate_at_launch = true
}
}
resource "aws_autoscaling_policy" "demo-cluster" {
name = "${var.app_name}-ecs-autoscaling-policy"
policy_type = "TargetTrackingScaling"
estimated_instance_warmup = "90"
adjustment_type = "ChangeInCapacity"
autoscaling_group_name = aws_autoscaling_group.ecs-autoscaling-group.name
target_tracking_configuration {
predefined_metric_specification {
predefined_metric_type = "ASGAverageCPUUtilization"
}
target_value = 40.0
}
}
The cluster name was added to the instances successfully via user data:
$ cat /etc/ecs/ecs.config
ECS_CLUSTER=mercure-cluster
But I'm getting an error:
service mercure-service was unable to place a task because no
container instance met all of its requirements. Reason: No Container
Instances were found in your cluster.
ecs-agent.log:
$ grep 'WARN\|ERROR' ecs-agent.log.2019-10-24-10
2019-10-24T10:36:45Z [WARN] Error getting valid credentials (AKID ): NoCredentialProviders: no valid providers in chain. Deprecated.
2019-10-24T10:36:45Z [ERROR] Unable to register as a container instance with ECS: NoCredentialProviders: no valid providers in chain. Deprecated.
2019-10-24T10:36:45Z [ERROR] Error registering: NoCredentialProviders: no valid providers in chain. Deprecated.
ecs-init.log:
$ grep 'WARN\|ERROR' ecs-init.log
2019-10-24T10:36:45Z [WARN] ECS Agent failed to start, retrying in 547.77941ms
2019-10-24T10:36:46Z [WARN] ECS Agent failed to start, retrying in 1.082153551s
2019-10-24T10:36:50Z [WARN] ECS Agent failed to start, retrying in 2.066145821s
2019-10-24T10:36:55Z [WARN] ECS Agent failed to start, retrying in 4.235010051s
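NoCredentialProviders from the ECS agent usually means the instance profile's role could not hand out credentials. In the trust policy above, ecs-ec2-role can only be assumed by ecs.amazonaws.com and ecs-tasks.amazonaws.com, but an instance profile is consumed by the EC2 service, so ec2.amazonaws.com has to be trusted too. A sketch of the trust policy the container instances would need (keeping ecs-tasks.amazonaws.com, since the same role is also used as the task execution role):
resource "aws_iam_role" "ecs-ec2-role" {
  name = "${var.app_name}-ecs-ec2-role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}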