How to integrate Terraform with atlassian/localstack?

Terraform can be configured with custom S3 endpoints, and it seems that localstack can create local stacks for S3, SES, CloudFormation, and a few other services.
The question is: what should I write in the Terraform configuration to use localstack's S3 endpoint?

Terraform does not officially support "AWS-workalike" systems, since they often have subtle quirks and differences relative to AWS itself. However, they are supported on a best-effort basis, and this may work if localstack is able to provide a sufficiently realistic impression of S3 for Terraform's purposes.
According to the localstack docs, by default the S3 API is exposed at http://localhost:4572, so setting the custom endpoint this way may work:
provider "aws" {
endpoints {
s3 = "http://localhost:4572"
}
}
Depending on the capabilities of localstack, you may also need to set a few other provider arguments:
s3_force_path_style, to use path-style addressing for buckets and objects.
skip_credentials_validation, since localstack does not appear to implement the AWS Security Token Service.
skip_metadata_api_check, if IAM-style credentials will not be used, to prevent Terraform from trying to fetch credentials from the EC2 metadata API.

Building off Martin Atkins' answer, here's a sample Terraform file that works with localstack:
provider "aws" {
region = "us-east-1"
access_key = "anaccesskey"
secret_key = "asecretkey"
skip_credentials_validation = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
s3 = "http://localhost:4572"
}
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "public-read"
}
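For completeness, here is one way this can be brought up end to end, as a rough sketch assuming the Docker image and the legacy per-service S3 port 4572 used above (newer LocalStack releases expose everything on the edge port 4566 instead):
# Start LocalStack with only the S3 service enabled (legacy per-service port 4572)
docker run -d -p 4572:4572 -e SERVICES=s3 localstack/localstack

# Then run Terraform against it as usual
terraform init
terraform apply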

Related

Terraform Elastic Beanstalk Environment - setting for encrypting S3 bucket?

I am trying to deploy a simple Flask application on Elastic Beanstalk using Terraform.
I am using Terraform's standard resource for an Elastic Beanstalk environment, aws_elastic_beanstalk_environment.
I am able to deploy my application successfully; however, during deployment Elastic Beanstalk creates an S3 bucket, elasticbeanstalk-region-account-id, which is not encrypted by default.
I want to change this behaviour and make sure this bucket is encrypted when it gets created. Which setting do I use to accomplish this? I could not find the relevant setting for this. Any ideas?
By default Elastic Beanstalk creates an unencrypted bucket, so the aws_elastic_beanstalk_environment resource cannot do anything here.
From the AWS docs:
Elastic Beanstalk doesn't turn on default encryption for the Amazon S3 bucket that it creates. This means that by default, objects are stored unencrypted in the bucket (and are accessible only by authorized users). Some applications require all objects to be encrypted when they are stored—on a hard drive, in a database, etc. (also known as encryption at rest). If you have this requirement, you can configure your account's buckets for default encryption.
So you need to enable it yourself. Try the following: after you create the Beanstalk environment, look up the S3 bucket created by Beanstalk and enable server-side encryption on it with the aws_s3_bucket_server_side_encryption_configuration resource.
resource "aws_kms_key" "mykey" {
description = "This key is used to encrypt bucket objects"
deletion_window_in_days = 10
}
data "aws_s3_bucket" "mybucket" {
bucket = "elasticbeanstalk-region-account-id" # here change the value with your information
}
resource "aws_s3_bucket_server_side_encryption_configuration" "example" {
bucket = data.aws_s3_bucket.mybucket
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.mykey.arn
sse_algorithm = "aws:kms"
}
}
}

Problem Initializing terraform with s3 backend - CredentialRequiresARNError

I'm having problems initializing the Terraform S3 backend in the following setup. This works well with Terraform version 0.11.15 but fails with 0.15.5 and 1.0.7.
There are 2 files:
terraform.tf
provider "aws" {
region = "eu-west-1"
}
terraform {
backend "s3" {
}
}
resource "aws_s3_bucket" "this" {
bucket = "test-bucket"
acl = "private"
}
test-env.tfvars
encrypt = true
dynamodb_table = "terraform-test-backend"
bucket = "terraform-test-backend"
key = "terraform/deployment/test-release.tfstate"
region = "eu-west-1"
When I run terraform init -backend-config=test-env.tfvars using Terraform 0.11.15 it works and I can perform terraform apply. Here is the output:
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "aws" (2.70.0)...
* provider.aws: version = "~> 2.70"
But when I try to use versions 0.15.5 and 1.0.7 I get following error:
Error: error configuring S3 Backend: Error creating AWS session: CredentialRequiresARNError: credential type source_profile requires role_arn, profile default
Any ideas how to fix it?
A few changes were introduced with respect to the S3 backend and the way Terraform checks for credentials in versions > 0.13.
Take a look at the following GitHub issue, or more specifically this one. It's also outlined in the changelog.
I believe that the issue you are facing is related to the way your AWS profile is set up (check your ~/.aws/config).
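For illustration only, a hypothetical ~/.aws/config that triggers this exact error would have a profile declaring source_profile without the role_arn that newer Terraform versions insist on (the names and ARN below are made up):
# ~/.aws/config (hypothetical example)
[default]
source_profile = base
# Missing role_arn; Terraform >= 0.13 rejects this combination.
# Either add a role_arn here or drop source_profile:
# role_arn = arn:aws:iam::123456789012:role/terraform

[profile base]
region = eu-west-1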

How to use multiple AWS accounts to isolate Terraform state between environments

How can I do to use s3 backend that points to a different AWS account?
In other words, I would like to have something like that:
Dev environment state on an S3 bucket in AWS account A
Stage environment state on another S3 bucket on AWS account B
Can anyone help me, please?
The documentation for Terraform's s3 backend has a section, Multi-account AWS Architecture, with some recommendations, suggestions, and caveats for using Terraform in a multi-account AWS architecture.
That guide is far more detailed than I can reproduce here, but the key points of recommendation are:
Use a separate AWS account for Terraform and any other administrative tools you use to provision and configure your environments, so that the infrastructure that Terraform uses is entirely separate from the infrastructure that Terraform manages.
This reduces the risk of an incorrect Terraform configuration inadvertently breaking your ability to use Terraform itself (e.g. by deleting the state object, or by removing necessary IAM permissions). It also reduces the possibility for an attacker to use vulnerabilities in your main infrastructure to escalate to access to your administrative infrastructure.
Use sts:AssumeRole to indirectly access IAM roles with administrative access in each of your main environment AWS accounts.
This allows you to centralize all of your direct administrative access in a single AWS account where you can more easily audit it, reduces credential sprawl, and lets you conveniently configure the AWS provider for that cross-account access (it has assume_role support built in), as sketched below.
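As a rough sketch of that cross-account pattern (the account ID and role name here are placeholders, not taken from the question):
provider "aws" {
  region = "eu-west-1"

  # Credentials from the administrative account are used to assume a
  # role in the target environment account.
  assume_role {
    role_arn     = "arn:aws:iam::111111111111:role/Terraform"
    session_name = "terraform"
  }
}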
The guide also discusses using workspaces to represent environments. That advice is perhaps more debatable given the guidance elsewhere in When to use Multiple Workspaces, but the principle of using an administrative account and IAM delegation is still applicable even if you follow this advice of having a separate root module per environment and using shared modules to represent common elements.
As with all things in system architecture, these aren't absolutes and what is best for your case will depend on your details, but hopefully the content in these two documentation sections I've linked to will help you weigh various options and decide what is best for your specific situation.
There are a few solutions to it:
Provide the AWS profile name at the command line while running terraform init, and inject the Terraform backend variables at runtime:
AWS_PROFILE=aws-dev terraform init -backend-config="bucket=825df6bc4eef-state" \
-backend-config="dynamodb_table=825df6bc4eef-state-lock" \
-backend-config="key=terraform-multi-account/terraform.tfstate"
or wrap this command in a Makefile as it is pretty long and forgettable.
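For example, a hypothetical Makefile target wrapping that command (the target name is made up) could look like:
# Makefile (illustrative)
init-dev:
	AWS_PROFILE=aws-dev terraform init \
		-backend-config="bucket=825df6bc4eef-state" \
		-backend-config="dynamodb_table=825df6bc4eef-state-lock" \
		-backend-config="key=terraform-multi-account/terraform.tfstate"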
Keep separate directories per environment and provide the roles, your credentials, or the profile name, even using a shared credentials file:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
Terraform Workspaces
terragrunt
I don't think it is possible to have a separate S3 backend for each workspace without some hijinks at this time. If you are ok with one S3 backend in one account it's pretty easy to have different accounts associated with each workspace.
# backend.tf
terraform {
  backend "s3" {
    profile        = "default"
    bucket         = "my-terraform-state"
    key            = "terraform-multi-account-test/terraform.state"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "my-terraform-state-lock"
  }
}
and
# provider.tf
variable "workspace_accounts" {
  type = map(string)
  default = {
    "sandbox" = "my-sandbox-keys"
    "dev"     = "default"
    "prod"    = "default"
  }
}

provider "aws" {
  shared_credentials_file = "$HOME/.aws/credentials"
  profile                 = var.workspace_accounts[terraform.workspace]
  region                  = "eu-west-1"
}
See https://github.com/hashicorp/terraform/issues/16627

Using AWS Terraform, how to enable S3 backend authentication with assumed-role MFA credentials

I provision AWS resources using Terraform from a Python script that calls terraform via the shell:
os.system('terraform apply')
The only way I found to enable Terraform authentication, after enabling MFA and assuming a role, is to export these environment variables:
os.system('export ASSUMED_ROLE="<>:botocore-session-123";
export AWS_ACCESS_KEY_ID="vfdgdsfg";
export AWS_SECRET_ACCESS_KEY="fgbdzf";
export AWS_SESSION_TOKEN="fsrfserfgs";
export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')
This worked OK until I configured S3 as the backend: the Terraform action is performed, but before the state can be stored in the bucket I get the standard (very confusing) exception:
Error: error configuring S3 Backend: Error creating AWS session: AssumeRoleTokenProviderNotSetError: assume role with MFA enabled, but AssumeRoleTokenProvider session option not set.
I read this excellent answer explaining that for security and other reasons backend configuration is separate.
Since I don't want to add actual secret keys to source code (as suggested by the post) I tried adding a reference to the profile and when it failed I added the actual keys just to see if it would work, which it didn't.
My working theory is that behind the scenes Terraform starts another process which doesn't access or inherit the credential environment variables.
How do I use s3 backend, with an MFA assumed role?
One must point the backend to the desired profile. In my case the same profile used for the provisioning itself.
Here is a minimal POC
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }

  backend "s3" {
    bucket  = "unique-terraform-state-dev"
    key     = "test"
    region  = "us-east-2"
    profile = "the_role_assumed_in_aws_credentials"
  }
}

provider "aws" {
  version = "~> 3.0"
  region  = var.region
}

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
As a reminder, it's run by a shell which has these environment variables:
os.system('export ASSUMED_ROLE="<>:botocore-session-123";
export AWS_ACCESS_KEY_ID="vfdgdsfg";
export AWS_SECRET_ACCESS_KEY="fgbdzf";
export AWS_SESSION_TOKEN="fsrfserfgs";
export AWS_SECURITY_TOKEN="fsrfserfgs"; terraform apply')
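For context, the profile named in the backend would typically be an assumed-role profile along these lines in ~/.aws/config (the ARNs and names below are placeholders, not taken from the question):
# ~/.aws/config (illustrative)
[profile the_role_assumed_in_aws_credentials]
role_arn       = arn:aws:iam::123456789012:role/deployer
source_profile = default
mfa_serial     = arn:aws:iam::123456789012:mfa/my-user
region         = us-east-2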

Error while configuring Terraform S3 Backend

I am configuring the S3 backend through Terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the S3 backend's bucket name, key, and region when running the terraform init command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access and secret keys as variables in providers.tf. While running terraform init it didn't prompt for any access key or secret key.
How to resolve this issue?
When running terraform init you have to add -backend-config options for your credentials (AWS keys), so your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue. The easiest and most secure way to fix it is to configure an AWS profile. Even if you properly set the AWS profile in your project, you have to mention it again in your backend configuration.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly.
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend. Therefore I ran terraform init and got the same error message.
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that this is not enough for the Terraform backend configuration; you have to mention the AWS profile in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment, v0.13.5.
Please see the provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, say your AWS_PROFILE is my-profile.
Then your backend.tf should be as below:
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable "${var.AWS_PROFILE}"
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in my AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add the access key and secret key to the default profile and try again.
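For reference, the default profile lives in ~/.aws/credentials and only needs the two keys (the values below are placeholders):
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx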
Don't - add variables for secrets. It's a really, really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is running in AWS you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this command-line stuff, unless you like wrapper scripts or typing.
Do - add yourself a remote_state.tf that looks like:
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
Had the same issue and I was using export AWS_PROFILE as I always had. I checked my credentials which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue, and below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created a dev-backend.conf and a prod-backend.conf file with the content below. The main point that fixed this issue was passing the role_arn value in the S3 backend configuration.
Define the following content in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Terraform initialise with dev s3 bucket config from local state to s3 state
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Terraform apply using dev environment variables file
$ terraform apply --var-file="dev-app.tfvars"
Terraform initialise with prod s3 bucket config from dev s3 bucket to prod s3 bucket state
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Terraform apply using prod environment variables file
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. It mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking the order of precedence into account.
When running locally you should definitely use the AWS CLI, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the AWS CLI to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded configuration within the provider block, because the AWS Terraform provider documentation spells out the order of authentication: parameters in the provider configuration are evaluated first, then environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file (see the sketch after this list).
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the AWS CLI installed at runtime. If you somehow need it, you should probably think about splitting the work into separate build pipelines.
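As an illustration of initializing the backend from a configuration file, a minimal sketch (the file name and values are assumptions): keep the settings in a separate file and pass it to init.
# backend.hcl (hypothetical file, kept outside the Terraform code)
bucket = "my-terraform-state"
key    = "my-app/terraform.tfstate"
region = "eu-central-1"
Then initialize with:
terraform init -backend-config=backend.hcl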
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
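A minimal terragrunt.hcl sketch of that dynamic backend configuration (the bucket name and key layout are assumptions, not part of the answer above):
# terragrunt.hcl (illustrative)
remote_state {
  backend = "s3"

  # Generate a backend.tf for the wrapped Terraform module
  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }

  config = {
    bucket         = "my-terraform-state"
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-central-1"
    encrypt        = true
    dynamodb_table = "my-terraform-state-lock"
  }
}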
If someone is using localstack: for me it only worked using this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, two profile names listed one after another caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it: my VPN caused the issue (this may not apply to everyone), so turning off my VPN fixed it.