Can't use LocalStack DynamoDB to lock Terraform state: UnrecognizedClientException - amazon-web-services

I've been trying to create a local development environment to play with Terraform, using LocalStack (https://github.com/localstack/localstack) running on Docker.
I was already able to create an S3 bucket to store the Terraform state, but I also want to simulate DynamoDB for state locking.
The configuration is:
localstack docker-compose.yml:
version: "3.2"
services:
localstack:
image: localstack/localstack:latest
container_name: localstack
ports:
- "4563-4599:4563-4599"
- "8080:8080"
environment:
- DATA_DIR=/tmp/localstack/data
- DEBUG=1
volumes:
- "./.localstack:/tmp/localstack"
- "/var/run/docker.sock:/var/run/docker.sock"
First Terraform:
Used as an initial bootstrap to create the S3 tfstate bucket and the DynamoDB table for the tfstate lock.
provider "aws" {
region = "us-east-1"
access_key = "foo"
secret_key = "bar"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
resource "aws_s3_bucket" "terraform_state" {
bucket = "terraform-state"
acl = "private"
versioning {
enabled = true
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
lifecycle {
prevent_destroy = true
}
}
resource "aws_s3_bucket_public_access_block" "terraform_state_access" {
bucket = aws_s3_bucket.terraform_state.id
block_public_acls = true
ignore_public_acls = true
block_public_policy = true
restrict_public_buckets = true
}
resource "aws_dynamodb_table" "terraform_state_lock" {
name = "terraformlock"
read_capacity = 5
write_capacity = 5
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
Second Terraform:
Creates resources, stores the state in S3, and uses DynamoDB for the lock.
terraform {
  backend "s3" {
    bucket                      = "terraform-state"
    key                         = "main/terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:4566"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
    dynamodb_table              = "terraformlock"
    encrypt                     = true
  }
}

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "foo"
  secret_key                  = "bar"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_force_path_style         = true

  endpoints {
    apigateway     = "http://localhost:4566"
    cloudformation = "http://localhost:4566"
    cloudwatch     = "http://localhost:4566"
    dynamodb       = "http://localhost:4566"
    es             = "http://localhost:4566"
    ec2            = "http://localhost:4566"
    firehose       = "http://localhost:4566"
    iam            = "http://localhost:4566"
    kinesis        = "http://localhost:4566"
    lambda         = "http://localhost:4566"
    route53        = "http://localhost:4566"
    redshift       = "http://localhost:4566"
    s3             = "http://localhost:4566"
    secretsmanager = "http://localhost:4566"
    ses            = "http://localhost:4566"
    sns            = "http://localhost:4566"
    sqs            = "http://localhost:4566"
    ssm            = "http://localhost:4566"
    stepfunctions  = "http://localhost:4566"
    sts            = "http://localhost:4566"
  }
}

resource "aws_sqs_queue" "test" {
  name = "test"

  tags = {
    "Environment" = "dev"
  }
}

resource "aws_sns_topic" "test" {
  name         = "test"
  display_name = "test"
}
Whenever I apply the second Terraform configuration, I get this error:
❯ terraform apply
Acquiring state lock. This may take a few moments...
Error: Error locking state: Error acquiring the state lock: 2 errors occurred:
* UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: UEGJV0SQ614NIEDRB93IAF0JQ7VV4KQNSO5AEMVJF66Q9ASUAAJG
* UnrecognizedClientException: The security token included in the request is invalid.
status code: 400, request id: U1IRF6CHGK7RM4SQEGVCSU699RVV4KQNSO5AEMVJF66Q9ASUAAJG
Terraform acquires a state lock to protect the state from being written
by multiple users at the same time. Please resolve the issue above and try
again. For most commands, you can disable locking with the "-lock=false"
flag, but this is not recommended.
Has anyone tried this, or have any idea what is causing it?

This probably happens because you are trying to use the real DynamoDB service rather than the one from LocalStack. To use LocalStack, you have to add
dynamodb_endpoint = "http://localhost:4566"
to your S3 backend configuration. Once you have updated the backend setup, you will have to reinitialize Terraform with terraform init.
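For example (a sketch based on the backend from the question, assuming the same bucket, lock table, and LocalStack edge port), the backend block would become:

terraform {
  backend "s3" {
    bucket                      = "terraform-state"
    key                         = "main/terraform.tfstate"
    region                      = "us-east-1"
    endpoint                    = "http://localhost:4566"
    # Point state locking at LocalStack's DynamoDB instead of the real AWS endpoint.
    dynamodb_endpoint           = "http://localhost:4566"
    dynamodb_table              = "terraformlock"
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    force_path_style            = true
    encrypt                     = true
  }
}

Running terraform init -reconfigure picks up the new backend settings without trying to migrate state between the old and new configuration.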

Related

GitHub Actions is unable to create resources in AWS mentioned in a Terraform module

terraform plan shows the correct result when run locally, but it does not create the resource defined in the module when run on GitHub Actions. The other resources in the root main.tf (S3) are created fine.
Root project:
terraform {
  backend "s3" {
    bucket = "sd-tfstorage"
    key    = "terraform/backend"
    region = "us-east-1"
  }
}

locals {
  env_name         = "sandbox"
  aws_region       = "us-east-1"
  k8s_cluster_name = "ms-cluster"
}

# Network Configuration
module "aws-network" {
  source = "github.com/<name>/module-aws-network"

  env_name              = local.env_name
  vpc_name              = "msur-VPC"
  cluster_name          = local.k8s_cluster_name
  aws_region            = local.aws_region
  main_vpc_cidr         = "10.10.0.0/16"
  public_subnet_a_cidr  = "10.10.0.0/18"
  public_subnet_b_cidr  = "10.10.64.0/18"
  private_subnet_a_cidr = "10.10.128.0/18"
  private_subnet_b_cidr = "10.10.192.0/18"
}
# EKS Configuration
# GitOps Configuration
Module:
provider "aws" {
region = var.aws_region
}
locals {
vpc_name = "${var.env_name} ${var.vpc_name}"
cluster_name = "${var.cluster_name}-${var.env_name}"
}
## AWS VPC definition
resource "aws_vpc" "main" {
cidr_block = var.main_vpc_cidr
enable_dns_support = true
enable_dns_hostnames = true
tags = {
"Name" = local.vpc_name,
"kubernetes.io/cluster/${local.cluster_name}" = "shared",
}
}
When you run it locally, you are using your default AWS profile to plan it.
Have you set up your GitHub environment with the correct AWS access to perform those actions?
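As a rough sketch (the provider block below is hypothetical, not taken from the question): nothing in the configuration names a profile, so credentials have to reach the GitHub Actions job from the environment, for example from repository secrets.

# Hypothetical provider block: with no profile or keys hardcoded, Terraform
# falls back to ambient credentials. Locally that is the default AWS profile;
# on GitHub Actions it only works if AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
# (or an assumed role) are exposed to the job, e.g. from repository secrets.
provider "aws" {
  region = "us-east-1"
}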

How to use LocalStack to test security group rules for an EC2 instance?

I would like to use LocalStack for quick testing of different security group rules. For example, I want to create an EC2 instance, create an internet gateway, and then add a security group that allows ingress to the EC2 instance only on a specific port. I'd then test it by doing a curl to the EC2 instance to see if I got the rule correct.
Is this possible with LocalStack?
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  region                      = "us-east-1"
  access_key                  = "localstacktest"
  secret_key                  = "localstacktestkey"
  skip_credentials_validation = true
  skip_requesting_account_id  = true
  skip_metadata_api_check     = true
  s3_use_path_style           = true

  endpoints {
    ec2           = "http://localhost:4566"
    iam           = "http://localhost:4566"
    s3            = "http://localhost:4566"
    glacier       = "http://localhost:4566"
    sns           = "http://localhost:4566"
    organizations = "http://localhost:4566"
  }
}

# ...elided...
resource "aws_security_group" "aws_ec2_sg" {
  name   = "aws_ec2_sg_allow_ssh"
  vpc_id = aws_vpc.aws_ec2_vpc.id

  ingress {
    description = "Allow inbound ssh traffic"
    cidr_blocks = [var.cidr_block]
    from_port   = var.port
    protocol    = "tcp"
    to_port     = var.port
  }

  # ...elided...
}

resource "aws_instance" "aws_ec2_instance" {
  ami                    = var.ami_id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.aws_ec2_sg.id]

  tags = {
    name = var.ec2_name
  }
}
Is there a way to then make a curl call and have it return "ok", but have it fail due to the security restrictions if I change the security group?

Access S3 from Lambda within VPC using Terraform

I have a Lambda
resource "aws_lambda_function" "api" {
function_name = "ApiController"
timeout = 10
s3_bucket = "mn-lambda"
s3_key = "mn/v1.0.0/sketch-avatar-api-1.0.0-all.jar"
handler = "io.micronaut.function.aws.proxy.MicronautLambdaHandler"
runtime = "java11"
memory_size = 1024
role = aws_iam_role.api_lambda.arn
vpc_config {
security_group_ids = [aws_security_group.lambda.id]
subnet_ids = [for subnet in aws_subnet.private: subnet.id]
}
}
Within a VPC
resource "aws_vpc" "vpc" {
cidr_block = var.vpc_cidr_block
enable_dns_support = true
enable_dns_hostnames = true
}
I created an aws_vpc_endpoint because I read that's what's needed for my VPC to access S3:
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
}
I created and attached a policy allowing access to S3
resource "aws_iam_role_policy_attachment" "s3" {
role = aws_iam_role.api_lambda.name
policy_arn = aws_iam_policy.s3.arn
}
resource "aws_iam_policy" "s3" {
policy = data.aws_iam_policy_document.s3.json
}
data "aws_iam_policy_document" "s3" {
statement {
effect = "Allow"
resources = ["*"]
actions = [
"s3:*",
]
}
}
It might be worth noting that the buckets I'm trying to access were created using the AWS CLI, in the same region, so not with Terraform.
The problem is that my Lambda is timing out when I try to read files from S3.
The full project can be found here should anyone want to take a peek.
You are creating com.amazonaws.${var.region}.s3, which is a gateway VPC endpoint; it shouldn't be confused with an interface VPC endpoint.
One of the key differences between the two is that the gateway type requires association with route tables. Thus you should use route_table_ids to associate your S3 gateway endpoint with the route tables of your subnets.
For example, to use the VPC's default main route table:
resource "aws_vpc_endpoint" "s3" {
vpc_id = aws_vpc.vpc.id
service_name = "com.amazonaws.${var.region}.s3"
route_table_ids = [aws_vpc.vpc.main_route_table_id]
}
Alternatively, you can use aws_vpc_endpoint_route_table_association to do it, as sketched below.
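A minimal sketch of that alternative, assuming the same endpoint resource and the VPC's main route table as above:

# Associates the existing S3 gateway endpoint with the VPC's main route table,
# as an alternative to setting route_table_ids on aws_vpc_endpoint directly.
resource "aws_vpc_endpoint_route_table_association" "s3" {
  vpc_endpoint_id = aws_vpc_endpoint.s3.id
  route_table_id  = aws_vpc.vpc.main_route_table_id
}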

How to create an EC2 instance on LocalStack with Terraform?

I am trying to run an EC2 instance on LocalStack using Terraform.
After 50 minutes of trying to create the instance,
I got this response from terraform apply:
Error: error getting EC2 Instance (i-cf4da152ddf3500e1) Credit
Specifications: SerializationError: failed to unmarshal error message
status code: 500, request id: caused by: UnmarshalError: failed to
unmarshal error message caused by: expected element type <Response>
but have <title>
on main.tf line 34, in resource "aws_instance" "example": 34:
resource "aws_instance" "example" {
For LocalStack and Terraform v0.12.18 I use this configuration:
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://localhost:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
I run LocalStack with docker-compose up directly from the newest GitHub source (https://github.com/localstack/localstack).
From the logs, I have seen that the EC2-related endpoint was set up.
I would appreciate any advice that would help me run EC2 on LocalStack.
It works fine with the below Docker image of LocalStack:
docker run -it -p 4500-4600:4500-4600 -p 8080:8080 --expose 4572 localstack/localstack:0.11.1
resource "aws_instance" "web" {
ami = "ami-0d57c0143330e1fa7"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
provider "aws" {
region = "us-east-1"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://LOCALHOST:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
terraform apply
aws_instance.web: Destroying... [id=i-099392def6b574255]
aws_instance.web: Still destroying... [id=i-099392def6b574255, 10s elapsed]
aws_instance.web: Destruction complete after 10s
aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Creation complete after 12s [id=i-9c942d138970d44a4]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
Note: it's a dummy instance, so it won't be available for SSH and the like; however, it is suitable for testing the terraform apply/destroy use case on EC2.
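As a side note, for newer LocalStack images the single edge port 4566 (used in the question at the top of this page) replaces the per-service ports above. A sketch of the same provider block in that style:

# Sketch assuming a LocalStack release that exposes everything on the edge
# port 4566: every service endpoint can point at the same URL instead of the
# older per-service ports 4567-4597.
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "mock_access_key"
  secret_key                  = "mock_secret_key"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    ec2 = "http://localhost:4566"
    s3  = "http://localhost:4566"
  }
}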

Break Terraform aws_rds_cluster replication in the cross-region cluster

Using Terraform, I am able to successfully create an RDS cluster using the following config in Region 1:
resource "aws_rds_cluster" "aurora_cluster" {
cluster_identifier = "${var.environment_name}-aurora-cluster"
database_name = "mydb"
master_username = "${var.rds_master_username}"
master_password = "${var.rds_master_password}"
backup_retention_period = 14
final_snapshot_identifier = "${var.environment_name}AuroraCluster"
apply_immediately = true
db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.default.name}"
tags {
Name = "${var.environment_name}-Aurora-DB-Cluster"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_rds_cluster_instance" "aurora_cluster_instance" {
count = "${length(split(",", var.multi_azs))}"
identifier = "${var.environment_name}-aurora-instance-${count.index}"
cluster_identifier = "${aws_rds_cluster.aurora_cluster.id}"
instance_class = "db.t2.small"
publicly_accessible = true
apply_immediately = true
tags {
Name = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
output "db_primary_cluster_arn" {
rds_cluster.aurora_cluster.cluster_identifier}"
value = "${"${format("arn:aws:rds:%s:%s:cluster:%s", "${var.db_region}", "${data.aws_caller_identity.current.account_id}", "${aws_rds_cluster.aurora_cluster.cluster_identifier}")}"}"
}
and create a cross-region replica using the config below, in Region 2:
resource "aws_rds_cluster" "aurora_crr_cluster" {
cluster_identifier = "${var.environment_name}-aurora-crr-cluster"
database_name = "mydb"
master_username = "${var.rds_master_username}"
master_password = "${var.rds_master_password}"
backup_retention_period = 14
final_snapshot_identifier = "${var.environment_name}AuroraCRRCluster"
apply_immediately = true
# Referencing to the primary region's cluster
replication_source_identifier = "${var.db_primary_cluster_arn}"
tags {
Name = "${var.environment_name}-Aurora-DB-CRR-Cluster"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
resource "aws_rds_cluster_instance" "aurora_crr_cluster_instance" {
count = "${length(split(",", var.multi_azs))}"
identifier = "${var.environment_name}-aurora-crr-instance-${count.index}"
cluster_identifier = "${aws_rds_cluster.aurora_crr_cluster.id}"
instance_class = "db.t2.small"
publicly_accessible = true
apply_immediately = true
tags {
Name = "${var.environment_name}-Aurora-DB-Instance-${count.index}"
ManagedBy = "terraform"
Environment = "${var.environment_name}"
}
lifecycle {
create_before_destroy = true
}
}
When I want to promote the cross-region replica in Region 2 to a standalone cluster, I remove the replication source (replication_source_identifier) from the cross-region RDS cluster and run terraform apply. The output from Terraform says:
module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifying... (ID: dev-aurora-crr-cluster)
replication_source_identifier: "arn:aws:rds:us-east-2:account_nbr:cluster:dev-aurora-cluster" => ""
module.db_replica.aws_rds_cluster.aurora_crr_cluster: Modifications complete after 1s (ID: dev-aurora-crr-cluster)
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.
But, I see "NO CHANGE" happening to the cross region cluster on the AWS console. I still see that the replication source is existing and same and the cross region cluster is NOT updated to a "standalone" in AWS.
If I try to do the same thing via the AWS CLI -
aws rds promote-read-replica-db-cluster --db-cluster-identifier="dev-aurora-crr-cluster" --region="us-west-1"
I see that the change is triggered immediately and the cross-region replica is promoted to a standalone cluster. Does anyone know where I may be going wrong,
or does Terraform simply not support promoting cross-region replicas to standalone clusters? Please advise.