In AWS samconfig.toml, is it possible to override only specific values instead of specifying the full set of config parameters for each individual configuration environment?
Here's an example:
[default]
[default.deploy]
[default.deploy.parameters]
stack_name = "my-stack"
region = "us-east-1"
...
[differentRegion.deploy.parameters]
region = "us-east-2"
When called with sam deploy --config-env differentRegion, the stack name should be my-stack and region should be us-east-2
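As far as I can tell, SAM CLI does not merge a named environment with [default]; each --config-env section stands alone, so shared values have to be repeated. A hedged sketch of what that would look like (verify against your SAM CLI version):

```toml
[default.deploy.parameters]
stack_name = "my-stack"
region = "us-east-1"

# Values are not inherited from [default]; repeat everything this
# environment needs and change only what differs.
[differentRegion.deploy.parameters]
stack_name = "my-stack"
region = "us-east-2"
```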
I want to store Terraform state files in an S3 bucket in one AWS account and deploy instance changes in another AWS account, using role_arn.
This is my configuration:
providers.tf
terraform {
  backend "s3" {
    bucket         = "bucket"
    key            = "tf/terraform.tfstate"
    encrypt        = "false"
    region         = "us-east-1"
    profile        = "s3"
    role_arn       = "arn:aws:iam::1111111111111:role/s3-role"
    dynamodb_table = "name"
  }
}

provider "aws" {
  profile = "ec2"
  region  = "eu-north-1"

  assume_role {
    role_arn = "arn:aws:iam::2222222222222:role/ec2-role"
  }
}
~/.aws/credentials
[s3-def]
aws_access_key_id = aaaaaaaaaa
aws_secret_access_key = sssssssss
[ec2-def]
aws_access_key_id = aaaaaaa
aws_secret_access_key = sssss
[s3]
role_arn = arn:aws:iam::1111111111:role/s3-role
region = us-east-1
source_profile = s3-def
[ec2]
role_arn = arn:aws:iam::22222222222:role/ec2-role
region = eu-north-1
source_profile = ec2-def
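One thing worth checking (an assumption on my part, not a confirmed fix): role-assuming profiles are conventionally defined in ~/.aws/config with a profile prefix, and some tools only resolve them from there:

```ini
; ~/.aws/config
[profile s3]
role_arn = arn:aws:iam::1111111111:role/s3-role
region = us-east-1
source_profile = s3-def

[profile ec2]
role_arn = arn:aws:iam::22222222222:role/ec2-role
region = eu-north-1
source_profile = ec2-def
```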
And when I try terraform init -migrate-state I get:
2022-08-03T17:23:21.334+0300 [INFO] Terraform version: 1.2.5
2022-08-03T17:23:21.334+0300 [INFO] Go runtime version: go1.18.1
2022-08-03T17:23:21.334+0300 [INFO] CLI args: []string{"terraform", "init", "-migrate-state"}
2022-08-03T17:23:21.334+0300 [INFO] Loading CLI configuration from /
2022-08-03T17:23:21.335+0300 [INFO] CLI command args: []string{"init", "-migrate-state"}
Initializing the backend...
2022-08-03T17:23:21.337+0300 [WARN] backend config has changed since last init
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.
2022-08-03T17:23:21.338+0300 [INFO] Attempting to use session-derived credentials
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
I just don't understand what this error means. Is it even possible to provide two different sets of credentials, one for S3 and one for EC2?
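One pattern that often helps here (a sketch, not a verified fix): point the backend at the profile that holds the static keys and let the backend assume the role itself, instead of stacking the backend's role_arn on top of a role-assuming profile:

```hcl
terraform {
  backend "s3" {
    bucket         = "bucket"
    key            = "tf/terraform.tfstate"
    region         = "us-east-1"
    profile        = "s3-def" # the profile with static keys
    role_arn       = "arn:aws:iam::1111111111111:role/s3-role" # backend assumes the role itself
    dynamodb_table = "name"
  }
}
```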
I have a ~/.aws/config that looks something like this:
[default]
region = us-east-1
[profile foo]
region = us-east-1
[profile foo-iam-manager]
role_arn = arn:aws:iam::012345678984:role/iam-manager
source_profile = foo
[profile foo-secrets-manager]
role_arn = arn:aws:iam::012345678984:role/secrets-manager
source_profile = foo
If I run:
aws --profile foo-iam-manager iam list-roles
It works great!
But if I run:
aws --profile foo-secrets-manager secretsmanager list-secrets
Then it fails with:
You must specify a region. You can also configure your region by running "aws configure".
And indeed, if I update ~/.aws/config to look like...
[default]
region = us-east-1
[profile foo]
region = us-east-1
[profile foo-iam-manager]
role_arn = arn:aws:iam::012345678984:role/iam-manager
source_profile = foo
[profile foo-secrets-manager]
role_arn = arn:aws:iam::012345678984:role/secrets-manager
source_profile = foo
region = us-east-1
...then everything works. Why does the foo-iam-manager profile work
just fine without a region setting in the profile, but
foo-secrets-manager requires one? I thought it would pull the
appropriate value from the source_profile setting.
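For what it's worth, my understanding (hedged) is that source_profile supplies only the credentials used to assume the role; region and other settings are not inherited from it, so each role profile needs its own region (or an AWS_REGION / AWS_DEFAULT_REGION environment variable):

```ini
[profile foo-secrets-manager]
role_arn = arn:aws:iam::012345678984:role/secrets-manager
source_profile = foo
region = us-east-1   ; not inherited from [profile foo]
```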
I tried to create a simple example using AWS environment variables. First, I export two values:
export AWS_ACCESS_KEY_ID= something
export AWS_SECRET_ACCESS_KEY= something
After that, I wrote some simple code.
provider "aws" {
  region     = "us-east-1"
  access_key = AWS_ACCESS_KEY_ID
  secret_key = AWS_SECRET_ACCESS_KEY
}

resource "aws_instance" "example" {
  ami           = "ami-40d28157"
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
When I hard-code the values instead of using the names AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, everything works, but with the code above I see the following error:
on main.tf line 4, in provider "aws":
4: secret_key = AWS_SECRET_ACCESS_KEY
A reference to a resource type must be followed by at least one attribute
access, specifying the resource name.
Any ideas on how to solve this problem?
You don't have to do anything. As explained in the Terraform AWS provider authentication documentation, Terraform automatically looks for credentials in this order:
Static credentials
Environment variables
Shared credentials/configuration file
CodeBuild, ECS, and EKS Roles
EC2 Instance Metadata Service (IMDS and IMDSv2)
So once you export your keys (make sure to export them correctly):
export AWS_ACCESS_KEY_ID="something"
export AWS_SECRET_ACCESS_KEY="something"
in your configuration you would then just use (as shown in the docs):
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-40d28157"
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-example"
  }
}
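If you later do want to pass keys through Terraform itself rather than the environment, a sketch using input variables (the variable names here are illustrative):

```hcl
variable "aws_access_key" {
  type      = string
  sensitive = true
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-east-1"
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}
```

Supply the values via TF_VAR_aws_access_key and TF_VAR_aws_secret_key so they never land in source control.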
I would like to manage AWS S3 buckets with Terraform and noticed that there's a region parameter for the resource.
I have an AWS provider that is configured for one region, and would like to use that provider to create S3 buckets in multiple regions if possible. My S3 buckets have a lot of common configuration that I don't want to repeat, so I have a local module to do all the repetitive stuff.
In mod-s3-bucket/main.tf, I have something like:
variable "bucket_region" {}
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  region = var.bucket_region
  bucket = var.bucket_name
}
And then in main.tf in the parent directory (tf root):
provider "aws" {
  region = "us-east-1"
}

module "somebucket" {
  source        = "./mod-s3-bucket"
  bucket_region = "us-east-1"
  bucket_name   = "useast1-bucket"
}

module "anotherbucket" {
  source        = "./mod-s3-bucket"
  bucket_region = "us-east-2"
  bucket_name   = "useast2-bucket"
}
When I run a terraform apply with that, both buckets get created in us-east-1 - is this expected behaviour? My understanding is that region should make the buckets get created in different regions.
Further to that, if I run a terraform plan after bucket creation, I see the following:
~ region = "us-east-1" -> "us-east-2"
on the 1 bucket, but after an apply, the region has not changed.
I know I can easily solve this by using a 2nd, aliased AWS provider, but am asking specifically about how the region parameter is meant to work for an aws_s3_bucket resource (https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#region)
terraform v0.12.24
aws v2.64.0
I think you'll need to do something like the docs show in this example for Replication Configuration: https://www.terraform.io/docs/providers/aws/r/s3_bucket.html#using-replication-configuration
# /root/main.tf
provider "aws" {
  region = "us-east-1"
}

provider "aws" {
  alias  = "us-east-2"
  region = "us-east-2"
}

module "somebucket" {
  source      = "./mod-s3-bucket"
  bucket_name = "useast1-bucket"
}

module "anotherbucket" {
  source = "./mod-s3-bucket"
  providers = {
    aws = aws.us-east-2
  }
  bucket_name = "useast2-bucket"
}

# /mod-s3-bucket/main.tf
variable "bucket_name" {}

resource "aws_s3_bucket" "s3_bucket" {
  bucket = var.bucket_name
}
Note that a resource's provider argument can't be populated from a variable; passing the aliased provider into the module with the providers meta-argument is the supported way to do this.
The region attribute in the aws_s3_bucket resource isn't honored as you'd expect; there is a bug report for this:
https://github.com/terraform-providers/terraform-provider-aws/issues/592
The multiple provider approach is needed.
Terraform informs you if you try to set the region directly in the resource:
╷
│ Error: Value for unconfigurable attribute
│
│ with aws_s3_bucket.my_bucket,
│ on s3.tf line 10, in resource "aws_s3_bucket" "my_bucket":
│ 10: region = "us-east-1"
│
│ Can't configure a value for "region": its value will be decided automatically based on the result of applying this configuration.
Terraform manages resources using the provider configuration, which is where the region is set. Alternatively, as already mentioned, you can use multiple configurations for the same provider by making use of the alias meta-argument.
You can optionally define multiple configurations for the same
provider, and select which one to use on a per-resource or per-module
basis. The primary reason for this is to support multiple regions for
a cloud platform; other examples include targeting multiple Docker
hosts, multiple Consul hosts, etc.
...
A provider block without an alias argument is the default
configuration for that provider. Resources that don't set the provider
meta-argument will use the default provider configuration that matches
the first word of the resource type name. link
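The alias pattern from that quote, sketched for two regions (bucket names here are illustrative):

```hcl
provider "aws" {
  region = "us-east-1" # default configuration
}

provider "aws" {
  alias  = "useast2"
  region = "us-east-2"
}

resource "aws_s3_bucket" "east1" {
  bucket = "example-useast1-bucket" # uses the default provider
}

resource "aws_s3_bucket" "east2" {
  provider = aws.useast2 # explicitly selects the aliased configuration
  bucket   = "example-useast2-bucket"
}
```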
For the AWS CLI configuration and credentials files, how do you connect the entries between the two files? It seems like my credentials work but my config file does not, even though the default profile works.
I am presently getting an error: You must specify a region. You can also configure your region by running "aws configure" when running something like:
aws ec2 describe-instances --profile devenv
However if I run the command:
aws s3api list-buckets --profile devenv
then I get a sensible response, a list of buckets.
Here are the credentials and config files:
~/.aws/credentials
[default]
aws_access_key_id = AAAAAAAAAA
aws_secret_access_key = BBBBBBBBBB
[devenv]
aws_access_key_id = CCCCCCCCCC
aws_secret_access_key = DDDDDDDDDD
[testenv]
aws_access_key_id = EEEEEEEEEE
aws_secret_access_key = FFFFFFFFFF
~/.aws/config
[default]
region = us-east-1
output = json
[devenv]
region = us-west-2
output = json
[testenv]
region = us-east-2
output = json
The problem here is in how the ~/.aws/config file is constructed.
The "default" entry must not be prefaced by the word "profile", but every non-default entry needs a "profile" prefix. Because the default entry works without the word "profile" (whether written by hand or generated by the aws configure command), it is a misleading model for the format the other entries require.
~/.aws/config
[default]
region = us-east-1
output = json
[profile devenv]
region = us-west-2
output = json
[profile testenv]
region = us-east-2
output = json