I am trying to enable managed updates with Terraform, but I am getting the following error:
Error: ConfigurationValidationException: Configuration validation exception: Invalid option specification (Namespace: 'aws:elasticbeanstalk:managedactions', OptionName: 'ManagedActionsEnabled'): You can't enable managed platform updates when your environment uses the service-linked role 'AWSServiceRoleForElasticBeanstalk'. Select a service role that has the 'AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy' managed policy.
Terraform code:
resource "aws_elastic_beanstalk_environment" "eb_env" {
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ManagedActionsEnabled"
value = "True"
}
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ServiceRoleForManagedUpdates"
value = aws_iam_role.beanstalk_service.arn
}
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "PreferredStartTime"
value = "Sat:04:00"
}
setting {
namespace = "aws:elasticbeanstalk:managedactions:platformupdate"
name = "UpdateLevel"
value = "patch"
}
}
resource "aws_iam_instance_profile" "beanstalk_service" {
name = "beanstalk-service-user"
role = "${aws_iam_role.beanstalk_service.name}"
}
resource "aws_iam_instance_profile" "beanstalk_ec2" {
name = "beanstalk-ec2-user"
role = "${aws_iam_role.beanstalk_ec2.name}"
}
resource "aws_iam_role" "beanstalk_service" {
name = "beanstalk-service"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "elasticbeanstalk.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": "elasticbeanstalk"
}
}
}
]
}
EOF
}
resource "aws_iam_role" "beanstalk_ec2" {
name = "aws-elasticbeanstalk-ec2-role"
assume_role_policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "beanstalk_service_health" {
name = "elastic-beanstalk-service-health"
roles = ["${aws_iam_role.beanstalk_service.id}"]
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSElasticBeanstalkEnhancedHealth"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_worker" {
name = "elastic-beanstalk-ec2-worker"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWorkerTier"
}
resource "aws_iam_service_linked_role" "managedupdates_eb" {
aws_service_name = "managedupdates.elasticbeanstalk.amazonaws.com"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_web" {
name = "elastic-beanstalk-ec2-web"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkWebTier"
}
resource "aws_iam_policy_attachment" "beanstalk_ec2_container" {
name = "elastic-beanstalk-ec2-container"
roles = ["${aws_iam_role.beanstalk_ec2.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkMulticontainerDocker"
}
resource "aws_iam_policy_attachment" "beanstalk_service" {
name = "elastic-beanstalk-service"
roles = ["${aws_iam_role.beanstalk_service.id}"]
policy_arn = "arn:aws:iam::aws:policy/AWSElasticBeanstalkManagedUpdatesCustomerRolePolicy"
}
I did attempt to create a service-linked role, but that is not the solution for the error above:
setting {
namespace = "aws:elasticbeanstalk:managedactions"
name = "ServiceRoleForManagedUpdates"
value = aws_iam_service_linked_role.managedupdates_eb.arn
}
I was missing the following setting:
setting {
namespace = "aws:elasticbeanstalk:environment"
name = "ServiceRole"
value = aws_iam_role.beanstalk_service.id
}
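For reference, here is a minimal sketch of how the explicit service role and the managed-actions settings fit together inside the environment resource, assuming the beanstalk_service role defined above (other environment arguments omitted):
resource "aws_elastic_beanstalk_environment" "eb_env" {
  # ... name, application, solution_stack_name, etc. ...

  # Use an explicit service role instead of the AWSServiceRoleForElasticBeanstalk service-linked role
  setting {
    namespace = "aws:elasticbeanstalk:environment"
    name      = "ServiceRole"
    value     = aws_iam_role.beanstalk_service.id
  }

  setting {
    namespace = "aws:elasticbeanstalk:managedactions"
    name      = "ManagedActionsEnabled"
    value     = "true"
  }

  setting {
    namespace = "aws:elasticbeanstalk:managedactions"
    name      = "ServiceRoleForManagedUpdates"
    value     = aws_iam_role.beanstalk_service.arn
  }
}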
Related
I have Terraform code that almost successfully builds an AWS Batch compute environment with an FSx file share mounted to it.
However, despite passing the aws_fsx_lustre_file_system resource a deployment_type of PERSISTENT_2:
resource "aws_fsx_lustre_file_system" "storage" {
storage_capacity = 1200
subnet_ids = [var.subnet_id]
deployment_type = "PERSISTENT_2"
per_unit_storage_throughput = 250
}
the FSx file system is only spun up as a scratch deployment (viewable via the AWS Management Console).
What additional information can I post here to help debug why this Terraform code is not respecting the deployment_type parameter?
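One way to confirm what was actually provisioned is to expose the attribute Terraform recorded for the file system; this is only a debugging sketch, not part of the original module:
# Debugging aid: surface the deployment type Terraform believes it created
output "fsx_deployment_type" {
  value = aws_fsx_lustre_file_system.storage.deployment_type
}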
Full code:
// ==========================================================
// Module input variables
// ----------------------------------------------------------
variable "region" {
type = string
}
variable "compute_environment_name" {
type = string
}
variable "job_queue_name" {
type = string
}
variable "max_vcpus" {
type = number
}
variable "vpc_id" {
type = string
}
variable "subnet_id" {
type = string
}
variable "security_group_id" {
type = string
}
variable "mounted_storage_bucket" {
type = string
}
// ==========================================================
// Components for batch processing for AWS Batch
// ----------------------------------------------------------
resource "aws_iam_role" "batch_role" {
name = "batch_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement":
[
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "batch.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs.amazonaws.com"
}
},
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
}
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# Attach the Batch policy to the Batch role
resource "aws_iam_role_policy_attachment" "batch_service_role" {
role = aws_iam_role.batch_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole"
}
resource "aws_iam_role_policy_attachment" "elastic_container_service_role" {
role = aws_iam_role.batch_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
# Security Group for batch processing
resource "aws_security_group" "batch_security_group" {
name = "batch_security_group"
description = "AWS Batch Security Group for batch jobs"
vpc_id = var.vpc_id
egress {
from_port = 0
to_port = 65535
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
created-by = "Terraform"
}
}
# IAM Role for underlying EC2 instances
resource "aws_iam_role" "ec2_role" {
name = "ec2_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# Assign the EC2 role to the EC2 profile
resource "aws_iam_instance_profile" "ec2_profile" {
name = "ec2_profile"
role = aws_iam_role.ec2_role.name
}
# Attach the EC2 container service policy to the EC2 role
resource "aws_iam_role_policy_attachment" "ec2_policy_attachment" {
role = aws_iam_role.ec2_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
# IAM Role for jobs
resource "aws_iam_role" "job_role" {
name = "job_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement":
[
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ecs-tasks.amazonaws.com"
}
}
]
}
EOF
tags = {
created-by = "Terraform"
}
}
# S3 read/write policy
resource "aws_iam_policy" "s3_policy" {
name = "s3_policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:Get*",
"s3:List*",
"s3:Put*"
],
"Resource": [
"arn:aws:s3:::${var.mounted_storage_bucket}",
"arn:aws:s3:::${var.mounted_storage_bucket}/*"
]
}
]
}
EOF
}
resource "aws_iam_policy" "ecs_policy" {
name = "ecs_policy"
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ecs:*"
],
"Resource": [
"*"
]
}
]
}
EOF
}
# Attach the policy to the job role
resource "aws_iam_role_policy_attachment" "job_policy_attachment_s3" {
role = aws_iam_role.job_role.name
policy_arn = aws_iam_policy.s3_policy.arn
}
resource "aws_iam_role_policy_attachment" "job_policy_attachment_ecs" {
role = aws_iam_role.job_role.name
policy_arn = aws_iam_policy.ecs_policy.arn
}
resource "aws_fsx_lustre_file_system" "storage" {
storage_capacity = 1200
subnet_ids = [var.subnet_id]
deployment_type = "PERSISTENT_2"
per_unit_storage_throughput = 250
}
resource "aws_fsx_data_repository_association" "storage_association" {
file_system_id = aws_fsx_lustre_file_system.storage.id
data_repository_path = "s3://${var.mounted_storage_bucket}"
file_system_path = "/data/fsx"
s3 {
auto_export_policy {
events = ["NEW", "CHANGED", "DELETED"]
}
auto_import_policy {
events = ["NEW", "CHANGED", "DELETED"]
}
}
}
resource "aws_launch_template" "launch_template" {
name = "launch_template"
update_default_version = true
user_data = base64encode(<<EOF
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/cloud-config; charset="us-ascii"
runcmd:
- region=${var.region}
- amazon-linux-extras install -y lustre2.10
- mkdir -p /data/fsx
- mount -t lustre ${aws_fsx_lustre_file_system.storage.dns_name}@tcp:/${aws_fsx_lustre_file_system.storage.mount_name} /data/fsx
--==MYBOUNDARY==--
EOF
)
}
// ==========================================================
// Batch setup
// - compute environment
// - job queue
// ----------------------------------------------------------
resource "aws_iam_role" "ecs_instance_role" {
name = "ecs_instance_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "ecs_instance_role" {
role = "${aws_iam_role.ecs_instance_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role"
}
resource "aws_iam_instance_profile" "ecs_instance_role" {
name = "ecs_instance_role"
role = "${aws_iam_role.ecs_instance_role.name}"
}
resource "aws_iam_role" "aws_batch_service_role" {
name = "aws_batch_service_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "batch.amazonaws.com"
}
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "aws_batch_service_role" {
role = "${aws_iam_role.aws_batch_service_role.name}"
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSBatchServiceRole"
}
resource "aws_batch_compute_environment" "batch_environment" {
compute_environment_name = var.compute_environment_name
compute_resources {
instance_role = "${aws_iam_instance_profile.ecs_instance_role.arn}"
launch_template {
launch_template_name = aws_launch_template.launch_template.name
version = "$Latest"
}
instance_type = [
"c6g.large",
"c6g.xlarge",
"c6g.2xlarge",
"c6g.4xlarge",
"c6g.8xlarge",
"c6g.12xlarge"
]
max_vcpus = 16
min_vcpus = 0
security_group_ids = [
aws_security_group.batch_security_group.id,
]
subnets = [
var.subnet_id
]
type = "EC2"
}
service_role = "${aws_iam_role.aws_batch_service_role.arn}"
type = "MANAGED"
depends_on = [aws_iam_role_policy_attachment.aws_batch_service_role]
tags = {
created-by = "Terraform"
}
}
resource "aws_batch_job_queue" "job_queue" {
name = "job_queue"
state = "ENABLED"
priority = 1
compute_environments = [
aws_batch_compute_environment.batch_environment.arn
]
depends_on = [aws_batch_compute_environment.batch_environment]
tags = {
created-by = "Terraform"
}
}
output "batch_compute_environment_id" {
value = aws_batch_compute_environment.batch_environment.id
}
output "batch_job_queue_id" {
value = aws_batch_job_queue.job_queue.id
}
output "batch_storage_mount_target" {
value = aws_fsx_lustre_file_system.storage.arn
}
output "batch_storage_mount_target_mount" {
value = aws_fsx_lustre_file_system.storage.mount_name
}
I checked all similar questions on Stack Overflow, but I couldn't find a decent answer for this issue. The main problem is that when I apply my Terraform, the instances come up and run successfully and I can see the node group under EKS, but I can't see any nodes under my EKS cluster. I followed this AWS article and applied the steps below, but it didn't work. The article also mentions aws-auth and user data. Should I add those as well, and how? (Do I need user data if I already added the optimized AMI?)
In summary, my main problems are:
my instances are running with the same name
my instances do not join the EKS cluster
Steps applied from the AWS article:
I added the AWS-optimized AMI, but it doesn't solve my problem:
/aws/service/eks/optimized-ami/1.22/amazon-linux-2/recommended/image_id (update: during installation the node group fails, probably because this image is not suitable for t2.micro)
I set the following parameters on the VPC, as the article says:
enable_dns_support = true
enable_dns_hostnames = true
I set the cluster-ownership tag on my worker nodes (see the tag sketch after this list):
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
I specified user data in the launch template. Below you can see my userdata.sh file, which is referenced from the launch template.
There are no nodes :(
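For reference, the cluster-ownership tag is normally written as a single map key rather than separate key/value attributes; a sketch of how the tag_specifications block could look, assuming the same variables used below:
tag_specifications {
  resource_type = "instance"
  tags = {
    Name                                        = "${var.cluster_name}-node"
    "kubernetes.io/cluster/${var.cluster_name}" = "owned"
  }
}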
node_grp.tf: my EKS worker node Terraform file with the policies.
resource "aws_iam_role" "eks_nodes" {
name = "eks-node-group"
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy" "node_autoscaling" {
name = "${var.base_name}-node_autoscaling_policy"
role = aws_iam_role.eks_nodes.name
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"autoscaling:DescribeTags"
],
"Resource": "*"
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_nodes.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_nodes.name
}
resource "aws_eks_node_group" "node" {
cluster_name = var.cluster_name
node_group_name = "${var.base_name}-node-group"
node_role_arn = aws_iam_role.eks_nodes.arn
subnet_ids = var.private_subnet_ids
scaling_config {
desired_size = var.desired_nodes
max_size = var.max_nodes
min_size = var.min_nodes
}
launch_template {
name = aws_launch_template.node_group_template.name
version = aws_launch_template.node_group_template.latest_version
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
]
}
resource "aws_launch_template" "node_group_template" {
name = "${var.cluster_name}_node_group"
instance_type = var.instance_type
user_data = base64encode(templatefile("${path.module}/userdata.sh", { API_SERVER_URL = var.cluster_endpoint, B64_CLUSTER_CA = var.ca_certificate, CLUSTER_NAME = var.cluster_name } ))
block_device_mappings {
device_name = "/dev/xvda"
ebs {
volume_size = var.disk_size
}
}
tag_specifications {
resource_type = "instance"
tags = {
"Instance Name" = "${var.cluster_name}-node"
Name = "${var.cluster_name}-node"
key = "kubernetes.io/cluster/${var.cluster_name}"
value = "owned"
}
}
}
cluster.tf: my main EKS cluster file.
resource "aws_iam_role" "eks_cluster" {
name = var.cluster_name
assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_iam_role_policy_attachment" "AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = aws_iam_role.eks_cluster.name
}
resource "aws_eks_cluster" "eks_cluster" {
name = var.cluster_name
role_arn = aws_iam_role.eks_cluster.arn
enabled_cluster_log_types = ["api", "audit", "authenticator","controllerManager","scheduler"]
vpc_config {
security_group_ids = [var.security_group_id]
subnet_ids = flatten([ var.private_subnet_ids, var.public_subnet_ids ])
endpoint_private_access = false
endpoint_public_access = true
}
depends_on = [
aws_iam_role_policy_attachment.AmazonEKSClusterPolicy,
aws_iam_role_policy_attachment.AmazonEKSServicePolicy
]
}
resource "aws_iam_openid_connect_provider" "oidc_provider" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = var.trusted_ca_thumbprints
url = aws_eks_cluster.eks_cluster.identity[0].oidc[0].issuer
}
userdata.sh: my user data file, referenced from the launch template.
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
set -ex
/etc/eks/bootstrap.sh ${CLUSTER_NAME} --b64-cluster-ca ${B64_CLUSTER_CA} --apiserver-endpoint ${API_SERVER_URL}
--==MYBOUNDARY==--
I'm creating a test Elasticsearch domain on AWS using Terraform. I can't manage to give full access from all IP addresses, and how do I automatically add a username and password to log in to Kibana? I read the manuals on GitHub, but I didn't understand how to do it. Please help.
resource "aws_elasticsearch_domain" "es" {
domain_name = var.domain
elasticsearch_version = var.version_elasticsearch
cluster_config {
instance_type = var.instance_type
}
snapshot_options {
automated_snapshot_start_hour = var.automated_snapshot_start_hour
}
ebs_options {
ebs_enabled = var.ebs_volume_size > 0 ? true : false
volume_size = var.ebs_volume_size
volume_type = var.volume_type
}
tags = {
Domain = var.tag_domain
}
}
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
description = "Allows Amazon ES to manage AWS resources for a domain on your behalf."
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"*"
]
}
},
"Resource": "${aws_elasticsearch_domain.es.arn}/*""
}
]
}
POLICIES
}
The access control for AWS OpenSearch is documented at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html, and the kind of access you are looking to achieve is called 'fine-grained access control', which is explained in detail at https://docs.aws.amazon.com/opensearch-service/latest/developerguide/fgac.html.
I know this Terraform resource is not documented well enough to explain these different access types, which is why I am sharing a modified version of your code, with the additional arguments that were missing, to get your task going.
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
}
variable "master_user_password" {
type = string
}
# Elasticsearch domain
resource "aws_elasticsearch_domain" "es_example" {
domain_name = "example-domain"
elasticsearch_version = "OpenSearch_1.0"
cluster_config {
instance_type = "t3.small.elasticsearch"
}
ebs_options {
ebs_enabled = true
volume_size = 10
volume_type = "gp2"
}
encrypt_at_rest {
enabled = true
}
node_to_node_encryption {
enabled = true
}
# This is required for using advanced security options
domain_endpoint_options {
enforce_https = true
tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}
# Authentication
advanced_security_options {
enabled = true
internal_user_database_enabled = true
master_user_options {
master_user_name = "es-admin"
master_user_password = var.master_user_password
# You can also use IAM role/user ARN
# master_user_arn = var.es_master_user_arn
}
}
tags = {
Domain = "es_example"
}
}
resource "aws_elasticsearch_domain_policy" "main" {
domain_name = aws_elasticsearch_domain.es_example.domain_name
access_policies = <<POLICIES
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Resource": "${aws_elasticsearch_domain.es_example.arn}/*"
}
]
}
POLICIES
}
This code is working for me and I was able to access OpenSearch Dashboard from my browser and was able to login using the credentials I specified in terraform code.
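As a usage note, the master_user_password variable can be supplied at apply time, for example via a terraform.tfvars file or the TF_VAR_master_user_password environment variable; the value below is just a placeholder:
# terraform.tfvars
master_user_password = "Example#Passw0rd123" # placeholder; pick your own strong password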
I am struggling to get an AWS Lambda function to connect to an AWS Elasticsearch cluster.
I have an AWS Lambda function defined as the following:
resource "aws_lambda_function" "fun1" {
function_name = "fun1"
role = aws_iam_role.ia0.arn
vpc_config {
security_group_ids = local.security_group_ids
subnet_ids = local.subnet_ids
}
environment {
variables = {
ELASTICSEARCH_ENDPOINT = "https://${aws_elasticsearch_domain.es.endpoint}"
}
}
}
resource "aws_iam_role" "ia0" {
name = "lambda-exec-role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.ia0.id
policy_arn = aws_iam_policy.lambda_logging.arn
}
data "aws_iam_policy" "AWSLambdaBasicExecutionRole" {
arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
resource "aws_iam_role_policy_attachment" "AWSLambdaBasicExecutionRole" {
role = aws_iam_role.ia0.id
policy_arn = data.aws_iam_policy.AWSLambdaBasicExecutionRole.arn
}
data "aws_iam_policy" "AWSLambdaVPCAccessExecutionRole" {
arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
resource "aws_iam_role_policy_attachment" "AWSLambdaVPCAccessExecutionRole" {
role = aws_iam_role.ia0.id
policy_arn = data.aws_iam_policy.AWSLambdaVPCAccessExecutionRole.arn
}
My VPC is defined like that:
locals {
security_group_ids = [aws_security_group.sg0.id]
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}
resource "aws_vpc" "vpc0" {
cidr_block = "10.0.0.0/16"
enable_dns_hostnames = true
enable_dns_support = true
}
resource "aws_subnet" "private_a" {
vpc_id = aws_vpc.vpc0.id
cidr_block = cidrsubnet(aws_vpc.vpc0.cidr_block, 2, 1)
availability_zone = "eu-west-3a"
}
resource "aws_subnet" "private_b" {
vpc_id = aws_vpc.vpc0.id
cidr_block = cidrsubnet(aws_vpc.vpc0.cidr_block, 2, 2)
availability_zone = "eu-west-3b"
}
resource "aws_security_group" "sg0" {
vpc_id = aws_vpc.vpc0.id
}
Finally my cluster looks like that:
resource "aws_elasticsearch_domain" "es" {
domain_name = "es"
elasticsearch_version = "7.9"
cluster_config {
instance_count = 2
zone_awareness_enabled = true
instance_type = "t2.small.elasticsearch"
}
domain_endpoint_options {
enforce_https = true
tls_security_policy = "Policy-Min-TLS-1-2-2019-07"
}
ebs_options {
ebs_enabled = true
volume_size = 10
}
vpc_options {
security_group_ids = local.security_group_ids
subnet_ids = local.subnet_ids
}
}
resource "aws_iam_role_policy" "rp0" {
name = "rp0"
role = aws_iam_role.ia0.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"es:*"
],
"Resource": [
"${aws_elasticsearch_domain.es.arn}",
"${aws_elasticsearch_domain.es.arn}/*"
],
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"${aws_subnet.private_a.cidr_block}",
"${aws_subnet.private_b.cidr_block}"
]
}
}
},
{
"Effect": "Allow",
"Action": [
"ec2:DescribeVpcs",
"ec2:DescribeVpcAttribute",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeNetworkInterfaces",
"ec2:CreateNetworkInterface",
"ec2:CreateNetworkInterfacePermission",
"ec2:DeleteNetworkInterface"
],
"Resource": [
"*"
]
}
]
}
EOF
}
Despite that, I still get this response:
Response
{ responseStatus = Status {statusCode = 403, statusMessage = "Forbidden"}
, responseVersion = HTTP/1.1
, responseHeaders =
[("Date","xxx")
,("Content-Type","application/json")
,("Content-Length","72")
,("Connection","keep-alive")
,("x-amzn-RequestId","xxx")
,("Access-Control-Allow-Origin","*")
]
, responseBody = "{\"Message\":\"User: anonymous is not authorized to perform: es:ESHttpPut\"}\"
, responseCookieJar = CJ {expose = []}, responseClose' = ResponseClose
}"
According to the AWS documentation, using CIDR should be sufficient, but in practice something is missing.
Thanks in advance for your help.
You need to sign the request before making the HTTP call so Elasticsearch can tell who is initiating the request. I don't know which programming language you are using; here is what we can do in Node.js.
For a simple HTTP call:
const AWS = require('aws-sdk'); // aws-sdk v2
// "endpoint" is an AWS.Endpoint built from your domain endpoint
let request = new AWS.HttpRequest(endpoint, 'us-east-1');
let credentials = new AWS.EnvironmentCredentials('AWS');
let signers = new AWS.Signers.V4(request, 'es'); // SigV4 signer (internal but accessible in the v2 SDK)
signers.addAuthorization(credentials, new Date());
If you are using a package like @elastic/elasticsearch, you can combine it with http-aws-es to create a client that signs requests; it might look something like:
let options = {
hosts: [ yourHost ],
connectionClass: require('http-aws-es'),
awsConfig: new AWS.Config({ region: 'us-east-1', credentials: new AWS.EnvironmentCredentials('AWS') })
};
client = require('elasticsearch').Client(options);
I've been trying to write a Terraform script that creates a Cognito user pool and an identity pool with linked auth and unauth roles, but I can't find a good example of doing this. Here is what I have so far:
cognito.tf:
resource "aws_cognito_user_pool" "pool" {
name = "Sample User Pool"
admin_create_user_config {
allow_admin_create_user_only = false
}
/* More stuff here, not included*/
}
resource "aws_cognito_user_pool_client" "client" {
name = "client"
user_pool_id = "${aws_cognito_user_pool.pool.id}"
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "SampleIdentityPool"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = "${aws_cognito_user_pool_client.client.id}"
provider_name = ""
server_side_token_check = true
}
}
So, I want to attach an auth role and an unauth role to this. I'm still trying to get my head around how to define and link IAM roles in Terraform, but here is what I have so far:
resource "aws_cognito_identity_pool_roles_attachment" "main" {
identity_pool_id = "${aws_cognito_identity_pool.main.id}"
roles {
"authenticated" = <<EOF
{
actions = ["sts:AssumeRoleWithWebIdentity"]
principals {
type = "Federated"
identifiers = ["cognito-identity.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "cognito-identity.amazonaws.com:aud"
values = ["${aws_cognito_identity_pool.main.id}"]
}
condition {
test = "ForAnyValue:StringLike"
variable = "cognito-identity.amazonaws.com:amr"
values = ["authenticated"]
}
}
EOF
"unauthenticated" = <<EOF
{
actions = ["sts:AssumeRoleWithWebIdentity"]
principals {
type = "Federated"
identifiers = ["cognito-identity.amazonaws.com"]
}
condition {
test = "StringEquals"
variable = "cognito-identity.amazonaws.com:aud"
values = ["${aws_cognito_identity_pool.main.id}"]
}
}
EOF
}
}
This, however, doesn't work. It creates the pools and client correctly, but doesn't attach anything to the auth/unauth roles. I can't figure out what I'm missing, and I can't find any examples of how to do this correctly other than by using the AWS console. Any help on working this out correctly in Terraform would be much appreciated!
After messing around with this for a few days, I finally figured it out. I was merely getting confused between "Assume Role Policy" and "Policy". Once I had that sorted out, it worked. Here is (roughly) what I have now; I'll put it here in the hope that it will save someone trying to figure this out for the first time a lot of grief.
For User Pool:
resource "aws_cognito_user_pool" "pool" {
name = "Sample Pool"
/* ... Lots more attributes */
}
For User Pool Client:
resource "aws_cognito_user_pool_client" "client" {
name = "client"
user_pool_id = aws_cognito_user_pool.pool.id
generate_secret = true
explicit_auth_flows = ["ADMIN_NO_SRP_AUTH"]
}
For Identity Pool:
resource "aws_cognito_identity_pool" "main" {
identity_pool_name = "SampleIdentities"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = aws_cognito_user_pool_client.client.id
provider_name = aws_cognito_user_pool.pool.endpoint
server_side_token_check = true
}
}
Attach Roles to Identity Pool:
resource "aws_cognito_identity_pool_roles_attachment" "main" {
identity_pool_id = aws_cognito_identity_pool.main.id
roles = {
authenticated = aws_iam_role.auth_iam_role.arn
unauthenticated = aws_iam_role.unauth_iam_role.arn
}
}
And, finally, the roles and policies:
resource "aws_iam_role" "auth_iam_role" {
name = "auth_iam_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role" "unauth_iam_role" {
name = "unauth_iam_role"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Federated": "cognito-identity.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_role_policy" "web_iam_unauth_role_policy" {
name = "web_iam_unauth_role_policy"
role = aws_iam_role.unauth_iam_role.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Action": "*",
"Effect": "Deny",
"Resource": "*"
}
]
}
EOF
}
Note: edited for updated Terraform language changes that no longer require "${...}" around references.
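For completeness, the audience and amr conditions from the original attempt belong inside the assume-role policy JSON; here is a hedged sketch of what the authenticated role could look like with them (the resource name auth_iam_role_conditioned is only illustrative):
resource "aws_iam_role" "auth_iam_role_conditioned" {
  name = "auth_iam_role_conditioned"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Federated": "cognito-identity.amazonaws.com" },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "cognito-identity.amazonaws.com:aud": "${aws_cognito_identity_pool.main.id}"
        },
        "ForAnyValue:StringLike": {
          "cognito-identity.amazonaws.com:amr": "authenticated"
        }
      }
    }
  ]
}
EOF
}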