Set Redis ElastiCache eviction policy via Terraform - amazon-web-services

I want to set my Redis cluster on AWS ElastiCache to the LRU eviction mode. The version of my Redis cluster is 5.0.6.
I have looked through the documentation of the Terraform aws_elasticache_replication_group resource but I cannot find any attribute to set the eviction policy. As far as I know, the default policy is no eviction.
How can I change the eviction policy in Terraform?

ElastiCache configuration is done via the aws_elasticache_parameter_group resource. You can then specify any of the parameters that are allowed by ElastiCache.
Looking at the available parameters, the one you want to set is maxmemory-policy. Note that the default is not noeviction; in all current versions of Redis on ElastiCache it defaults to volatile-lru, which might already be what you need. If you instead want allkeys-lru, you would do something like the following:
resource "aws_elasticache_parameter_group" "this" {
name = "cache-params"
family = "redis5.0"
parameter {
name = "maxmemory-policy"
value = "allkeys-lru"
}
}
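The parameter group on its own has no effect until the cluster references it. As a rough sketch (the identifier, node size, and 3.x-era argument names below are assumptions, not taken from the question), you would attach it to the replication group via parameter_group_name:

resource "aws_elasticache_replication_group" "this" {
  replication_group_id          = "example-redis"                # assumed identifier
  replication_group_description = "Redis 5.0.6 with allkeys-lru" # assumed description
  engine                        = "redis"
  engine_version                = "5.0.6"
  node_type                     = "cache.t3.micro"               # assumed node size
  number_cache_clusters         = 1
  parameter_group_name          = aws_elasticache_parameter_group.this.name
}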

Related

Need to enable backup replication feature in AWS RDS through Terraform

I need to enable the backup replication feature for an AWS RDS Oracle instance through Terraform. Is there an attribute on the Terraform side for that particular feature?
The only argument on the Terraform side is aws_db_instance's replicate_source_db
replicate_source_db - (Optional) Specifies that this resource is a Replicate database, and to use this value as the source database. This correlates to the identifier of another Amazon RDS Database to replicate (if replicating within a single region) or ARN of the Amazon RDS Database to replicate (if replicating cross-region). Note that if you are creating a cross-region replica of an encrypted database you will also need to specify a kms_key_id.
The replicate_source_db argument should be set to the ID or ARN of the source database.
resource "aws_db_instance" "oracle" {
# ... other arguments
}
resource "aws_db_instance" "oracle_replicant" {
# ... other arguments
replicate_source_db = aws_db_instance.oracle.id
}
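For the cross-region encrypted replica case mentioned in the quoted documentation, a hedged sketch would reference the source by ARN and pass a kms_key_id from the replica region (the provider alias, region, and KMS key below are assumptions):

provider "aws" {
  alias  = "replica_region"
  region = "us-west-2" # assumed replica region
}

resource "aws_db_instance" "oracle_cross_region_replica" {
  provider = aws.replica_region
  # ... other arguments
  replicate_source_db = aws_db_instance.oracle.arn
  kms_key_id          = aws_kms_key.replica.arn # assumed KMS key defined in the replica region
}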
Reference
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-read-replicas.html

Terraform not saving state of ECS Cluster containerInsights setting

This is my template:
resource "aws_ecs_cluster" "doesntmatter" {
name = var.doesntmatter_name
capacity_providers = ["FARGATE", "FARGATE_SPOT"]
setting {
name = "containerInsights"
value = "enabled"
}
tags = var.tags
}
When I run it, it properly creates the cluster and sets containerInsights to enabled.
But when I run terraform again, it wants to change this property as if it had not been set before.
It doesn't matter how many times I run it; it still thinks it needs to change the setting on every deployment.
Additionally, terraform state show on the resource does show that this setting is saved in the state file.
It's a bug that is resolved with v3.57.0 of the Terraform AWS Provider (released yesterday).
Amazon ECS is making a change to the ECS Describe-Clusters API. Previously, the response to a successful ECS Describe-Clusters API request included the cluster settings by default. This behavior was incorrect since, as documented here (https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-clusters.html), cluster settings is an optional field that should only be included when explicitly requested by the customer. With the change, ECS will no longer surface the cluster settings field in response to the Describe-Clusters API by default. Customers can continue to use the --include SETTINGS flag with the Describe-Clusters API to receive the cluster settings.
Tracking bug: https://github.com/hashicorp/terraform-provider-aws/issues/20684
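If your configuration pins the AWS provider, you also need to raise the version constraint so the fixed release can be selected; a minimal sketch (the exact constraint is your choice):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.57.0" # first release containing the fix
    }
  }
}

After bumping the constraint, run terraform init -upgrade so the newer provider release is actually installed.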

How to use multiple AWS account to isolate terraform state between environment

How can I use an S3 backend that points to a different AWS account?
In other words, I would like to have something like this:
Dev environment state on an S3 bucket in AWS account A
Stage environment state on another S3 bucket on AWS account B
Can anyone help me, please?
The documentation for Terraform's s3 backend includes a section Multi-account AWS Architecture which includes some recommendations, suggestions, and caveats for using Terraform in a multi-account AWS architecture.
That guide is far more detailed than I can reproduce here, but the key points of recommendation are:
Use a separate AWS account for Terraform and any other administrative tools you use to provision and configure your environments, so that the infrastructure that Terraform uses is entirely separate from the infrastructure that Terraform manages.
This reduces the risk of an incorrect Terraform configuration inadvertently breaking your ability to use Terraform itself (e.g. by deleting the state object, or by removing necessary IAM permissions). It also reduces the possibility for an attacker to use vulnerabilities in your main infrastructure to escalate to access to your administrative infrastructure.
Use sts:AssumeRole to indirectly access IAM roles with administrative access in each of your main environment AWS accounts.
This allows you to centralize all of your direct administrative access in a single AWS account where you can more easily audit it, reduces credentials sprawl, and also conveniently configure the AWS provider for that cross-account access (because it has assume_role support built-in).
The guide also discusses using workspaces to represent environments. That advice is perhaps more debatable given the guidance elsewhere in When to use Multiple Workspaces, but the principle of using an administrative account and IAM delegation is still applicable even if you follow this advice of having a separate root module per environment and using shared modules to represent common elements.
As with all things in system architecture, these aren't absolutes and what is best for your case will depend on your details, but hopefully the content in these two documentation sections I've linked to will help you weigh various options and decide what is best for your specific situation.
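To make that concrete, here is a rough sketch of one environment's backend and provider configuration under those recommendations (the bucket, lock table, account ID, and role name are placeholder assumptions):

terraform {
  backend "s3" {
    # State lives in a bucket owned by the administrative account
    bucket         = "admin-account-terraform-state"
    key            = "dev/terraform.tfstate"
    region         = "eu-west-1"
    dynamodb_table = "terraform-state-lock"
  }
}

provider "aws" {
  region = "eu-west-1"

  # Administrative credentials assume an admin role in the environment account
  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/TerraformAdmin"
  }
}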
There are a few solutions to it:
Provide the AWS profile name on the command line when running terraform init and inject the backend configuration variables at runtime:
AWS_PROFILE=aws-dev terraform init \
  -backend-config="bucket=825df6bc4eef-state" \
  -backend-config="dynamodb_table=825df6bc4eef-state-lock" \
  -backend-config="key=terraform-multi-account/terraform.tfstate"
or wrap this command in a Makefile, as it is pretty long and easy to forget.
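For this partial-configuration approach to work, the root module still has to declare an s3 backend block, even if most of its arguments are left out; a minimal sketch:

terraform {
  backend "s3" {
    # bucket, key and dynamodb_table are supplied via -backend-config at init time
    region  = "eu-west-1" # assumed; this could also be passed at init time
    encrypt = true
  }
}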
Keep separate directories per environment and supply the role, credentials, or profile name, for example via a shared credentials file:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
Terraform Workspaces
terragrunt
I don't think it is possible to have a separate S3 backend for each workspace without some hijinks at this time. If you are ok with one S3 backend in one account it's pretty easy to have different accounts associated with each workspace.
# backend.tf
terraform {
  backend "s3" {
    profile        = "default"
    bucket         = "my-terraform-state"
    key            = "terraform-multi-account-test/terraform.state"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "my-terraform-state-lock"
  }
}
and
# provider.tf
variable "workspace_accounts" {
  type = map(string)
  default = {
    "sandbox" = "my-sandbox-keys"
    "dev"     = "default"
    "prod"    = "default"
  }
}

provider "aws" {
  # Terraform does not expand shell variables such as $HOME, so use pathexpand()
  shared_credentials_file = pathexpand("~/.aws/credentials")
  profile                 = var.workspace_accounts[terraform.workspace]
  region                  = "eu-west-1"
}
See https://github.com/hashicorp/terraform/issues/16627

Deploying to multiple AWS accounts with Terraform?

I've been looking for a way to be able to deploy to multiple AWS accounts simultaneously in Terraform and coming up dry. AWS has the concept of doing this with Stacks but I'm not sure if there is a way to do this in Terraform? If so what would be some solutions?
You can read more about the CloudFormation solution in the AWS documentation.
You can define multiple provider aliases which can be used to run actions in different regions or even different AWS accounts.
So to perform some actions in your default region (or be prompted for it if not defined in environment variables or ~/.aws/config) and also in US East 1 you'd have something like this:
provider "aws" {
# ...
}
# Cloudfront ACM certs must exist in US-East-1
provider "aws" {
alias = "cloudfront-acm-certs"
region = "us-east-1"
}
You'd then refer to them like so:
data "aws_acm_certificate" "ssl_certificate" {
provider = aws.cloudfront-acm-certs
...
}
resource "aws_cloudfront_distribution" "cloudfront" {
...
viewer_certificate {
acm_certificate_arn = data.aws_acm_certificate.ssl_certificate.arn
...
}
}
So if you want to do things across multiple accounts at the same time then you could assume a role in the other account with something like this:
provider "aws" {
# ...
}
# Assume a role in the DNS account so we can add records in the zone that lives there
provider "aws" {
alias = "dns"
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
And refer to it like so:
data "aws_route53_zone" "selected" {
provider = aws.dns
name = "test.com."
}
resource "aws_route53_record" "www" {
provider = aws.dns
zone_id = data.aws_route53_zone.selected.zone_id
name = "www.${data.aws_route53_zone.selected.name"
...
}
Alternatively, you can provide credentials for different AWS accounts in a number of other ways, such as hardcoding them in the provider, using Terraform variables, using AWS SDK specific environment variables, or using a configured profile.
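As an illustration of the profile-based option, here is a sketch with two aliased providers pointing at named profiles in your credentials file (the profile and bucket names are assumptions):

provider "aws" {
  alias   = "account_a"
  region  = "eu-west-1"
  profile = "account-a" # assumed profile in ~/.aws/credentials
}

provider "aws" {
  alias   = "account_b"
  region  = "eu-west-1"
  profile = "account-b" # assumed profile in ~/.aws/credentials
}

resource "aws_s3_bucket" "in_account_b" {
  provider = aws.account_b
  bucket   = "example-bucket-in-account-b" # placeholder name
}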
I would recommend also combining your solution with Terraform workspaces:
Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend. They are convenient in a number of situations, but cannot solve all problems.
A common use for multiple workspaces is to create a parallel, distinct copy of a set of infrastructure in order to test a set of changes before modifying the main production infrastructure. For example, a developer working on a complex set of infrastructure changes might create a new temporary workspace in order to freely experiment with changes without affecting the default workspace.
Non-default workspaces are often related to feature branches in version control. The default workspace might correspond to the "master" or "trunk" branch, which describes the intended state of production infrastructure. When a feature branch is created to develop a change, the developer of that feature might create a corresponding workspace and deploy into it a temporary "copy" of the main infrastructure so that changes can be tested without affecting the production infrastructure. Once the change is merged and deployed to the default workspace, the test infrastructure can be destroyed and the temporary workspace deleted.
AWS S3 is in the list of the supported backends.
It is very easy to use (similar to working with git branches) and to combine with the selected AWS account.
terraform workspace list
dev
* prod
staging
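One way to tie the selected workspace to an AWS account, sketched here with placeholder account IDs and role names, is to key an assume-role ARN on terraform.workspace:

variable "workspace_role_arns" {
  type = map(string)
  default = {
    "dev"     = "arn:aws:iam::111111111111:role/TerraformAdmin"
    "staging" = "arn:aws:iam::222222222222:role/TerraformAdmin"
    "prod"    = "arn:aws:iam::333333333333:role/TerraformAdmin"
  }
}

provider "aws" {
  region = "eu-west-1"

  assume_role {
    role_arn = var.workspace_role_arns[terraform.workspace]
  }
}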
A few references regarding configuring the AWS provider to work with multiple accounts:
https://terragrunt.gruntwork.io/docs/features/work-with-multiple-aws-accounts/
https://assets.ctfassets.net/hqu2g0tau160/5Od5r9RbuEYueaeeycUIcK/b5a355e684de0a842d6a3a483a7dc7d3/devopscon-V2.1.pdf