Terraform and AWS secrets - amazon-web-services

I must be missing something in how AWS secrets can be accessed through Terraform. Here is the scenario I am struggling with:
I create an IAM user named "infra_user", create an access key ID and secret access key for the user, and download the values as plain text.
"infra_user" will be used to authenticate via Terraform to provision resources, let's say an S3 bucket and an EC2 instance.
To protect the ID and secret key of "infra_user", I store them in AWS secrets manager.
In order to authenticate with "infra_user" in my terraform script, I will need to retrieve the secrets via the following block:
data "aws_secretsmanager_secret" "arn" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
But, to even use the data block in my script and retrieve the secrets, wouldn't I need to authenticate to AWS in some other way in my provider block before I declare any resources?
If I create another user, say "tf_user", to just retrieve the secrets, where would I store the access key for "tf_user"? How do I avoid this circular authentication loop?

The Terraform AWS provider documentation has a section on Authentication and Configuration and lists an order of precedence for how the provider discovers which credentials to use. You can choose the method that makes the most sense for your use case.
For example, one (insecure) method would be to set the credentials directly in the provider:
provider "aws" {
region = "us-west-2"
access_key = "my-access-key"
secret_key = "my-secret-key"
}
Or, you could set the environment variables in your shell:
export AWS_ACCESS_KEY_ID="my-access-key"
export AWS_SECRET_ACCESS_KEY="my-secret-key"
export AWS_DEFAULT_REGION="us-west-2"
Now your provider block simplifies to:
provider "aws" {}
When you run Terraform commands, they will automatically use the credentials in your environment.
Or as yet another alternative, you can store the credentials in a profile configuration file and choose which profile to use by setting the AWS_PROFILE environment variable.
Authentication is somewhat more complex and configurable than it seems at first glance, so I'd encourage you to read the documentation.
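As a hedged sketch of how the original question's secret could then be read: once the provider itself is authenticated through one of the methods above (environment variables, a profile, and so on), the secret value can be fetched with the aws_secretsmanager_secret_version data source. The ARN below is the placeholder from the question, and the assumption that the secret stores a JSON map is illustrative only.
data "aws_secretsmanager_secret_version" "infra_user" {
  # placeholder ARN copied from the question
  secret_id = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}

locals {
  # assumes the secret value is a JSON key/value map, e.g. {"access_key": "...", "secret_key": "..."}
  infra_user_creds = jsondecode(data.aws_secretsmanager_secret_version.infra_user.secret_string)
}
Note that this still does not let you bootstrap the provider itself from Secrets Manager: the credentials the provider uses have to come from outside the configuration, which is exactly what the precedence list linked above describes.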

Related

Unable to access AWS account through terraform AWS provider

I'm facing an issue; the status code is 401:
"creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1"
The instance code:
resource "aws_instance" "test-EC2" {
instance_type = "t2.micro"
ami = "ami-07ffb2f4d65357b42"
}
I have checked the AMI region, but it is still not working.
Any help would be appreciated.
I am looking for a way to create and destroy tokens via the management console provided by AWS. I am learning about the Terraform AWS provider, which requires an access key, a secret key, and a token.
As stated in the error message:
creating ec2 instance: authfailure: aws was not able to validate the provided access credentials │ status code: 401, request id: d103063f-0b26-4b84-9719-886e62b0e2b1
it is clear that Terraform is not able to authenticate itself using the Terraform AWS provider.
You have to have a provider block in your Terraform configuration and use one of the supported ways to get authenticated.
provider "aws" {
region = var.aws_region
}
In general, the following are the ways to get authenticated to AWS via the Terraform AWS provider:
Parameters in the provider configuration
Environment variables
Shared credentials files
Shared configuration files
Container credentials
Instance profile credentials and region
For more details, please take a look at: https://registry.terraform.io/providers/hashicorp/aws/latest/docs#authentication-and-configuration
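For instance, the shared credentials and configuration files from that list can be pointed at explicitly in the provider block. This is a minimal sketch: the paths shown are the default locations and the profile name is a placeholder (the shared_config_files / shared_credentials_files arguments are available in provider v4 and later).
provider "aws" {
  region                   = "us-east-1"
  shared_config_files      = ["~/.aws/config"]      # default location, shown for illustration
  shared_credentials_files = ["~/.aws/credentials"] # default location, shown for illustration
  profile                  = "my-profile"           # placeholder profile name
}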
By default, if you are already programmatically signed in to your AWS account, the Terraform AWS provider will use those credentials.
For example:
If you are using aws_access_key_id and aws_secret_access_key to authenticate yourself, then you might have a profile for these credentials. You can check this info in your $HOME/.aws/credentials config file.
Export the profile using the command below and you are good to go.
export AWS_PROFILE="name_of_profile_using_secrets"
If you have an SSO user for authentication,
then you might have an SSO profile available in $HOME/.aws/config. In that case you need to sign in with the respective AWS SSO profile using the command below:
aws sso login --profile <sso_profile_name>
If you don't have an SSO profile yet, you can also configure one using the commands below and then export it.
aws configure sso
[....] # configure your SSO
export AWS_PROFILE=<your_sso_profile>
Do you have an AWS provider defined in your Terraform configuration?
provider "aws" {
region = var.aws_region
profile = var.aws_profile
}
If you are running this locally, set up an IAM user profile (use aws configure) and export that profile in your current session.
aws configure --profile xxx
export AWS_PROFILE=xxx
Once you have the profile set, this should work.
If you are running this deployment in a pipeline like GitHub Actions, you could also make use of OpenID Connect to avoid any access key and secret key.
Please find the detailed setup for OpenID Connect here.
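As a rough sketch of the AWS side of that setup in Terraform (not the linked guide itself; the thumbprint, role name, and repository values are placeholders you would replace), you create an OIDC identity provider for GitHub and a role that the workflow is allowed to assume:
# Trust GitHub's OIDC token issuer
resource "aws_iam_openid_connect_provider" "github" {
  url             = "https://token.actions.githubusercontent.com"
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = ["0000000000000000000000000000000000000000"] # placeholder thumbprint
}

# Role assumed by the GitHub Actions workflow instead of long-lived keys
resource "aws_iam_role" "github_actions" {
  name = "github-actions-deploy" # placeholder name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Federated = aws_iam_openid_connect_provider.github.arn }
      Action    = "sts:AssumeRoleWithWebIdentity"
      Condition = {
        StringLike = {
          "token.actions.githubusercontent.com:sub" = "repo:my-org/my-repo:*" # placeholder repository
        }
      }
    }]
  })
}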

Terraform Cloud / Enterprise - How to use AWS Assume Roles

I would like to use AWS Assume Roles, with Terraform Cloud / Enterprise
In Terraform Open Source, you would typically just do an Assume Role: the credential profile in .aws/ on the CLI provides the initial authentication, and the provider then performs the Assume Role:
provider "aws" {
assume_role {
role_arn = "arn:aws:iam::ACCOUNT_ID:role/ROLE_NAME"
session_name = "SESSION_NAME"
external_id = "EXTERNAL_ID"
}
}
The issue is, with Terraform Enterprise or Cloud, you cannot reference a profile, as the immutable infrastructure will not have that file in its directory.
Terraform Cloud/Enterprise needs to have an Access Key ID and Secret Access Key set as variables so its infrastructure can perform the Terraform run via its pipeline and authenticate to whatever AWS account you would like to provision within.
So the question is:
How can I perform an AWS Assume Role leveraging the Access Key ID and Secret Access Key of the AWS account that has the "Action": "sts:AssumeRole" policy?
I would think the below would work; however, Terraform is doing the initial authentication via the AWS credential profile creds for the account which has the sts:AssumeRole policy.
Can Terraform look at the access_key and secret_key to determine what AWS account to use when trying to assume the role, rather than use the AWS credential profile?
provider "aws" {
region = var.aws_region
access_key = var.access_key_id
secret_key = var.secret_access_key
assume_role {
role_arn = "arn:aws:iam::566264069176:role/RemoteAdmin"
#role_arn = "arn:aws:iam::<awsaccount>:role/<rolename>" # Do a replace in "file_update_automation.ps1"
session_name = "RemoteAdminRole"
}
}
In order to allow Terraform Cloud/Enterprise to get new Assume Role session tokens, it would need to use the access_key and secret_key, not an AWS creds profile, to tell it which AWS account has the sts:AssumeRole policy linking to the member AWS account to be provisioned.
Thank you
This can be achieved if you have a Business plan enabled and implement self-hosted Terraform agents in your infrastructure. See the video.
I used the exact same provider configuration, minus the explicit adding of the access keys.
The access keys were added in the Terraform Cloud workspace as environment variables.
This is definitely possible with Terraform Enterprise (TFE) if your TFE infrastructure is also hosted in AWS and the instance profile is trusted by the role you are trying to assume.
For Terraform Cloud (TFC) it is a different story: today there is no way to create a trust between TFC and an IAM role, but we can leverage the AWS SDK's ability to pick up credentials from environment variables. You have 2 options:
Create an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variable on the workspace and set them (remember to mark the secret access key as sensitive). The provider will pick these up in the env and work the same as on your local machine.
If all workspaces need to use the same access and secret keys, you can set the env variables on a variable set, which will apply to all workspaces.
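Putting that together, a hedged sketch of what the workspace configuration could look like: the keys are never written in the code, only the assume_role block is, and the role ARN and session name below are placeholders.
provider "aws" {
  region = var.aws_region

  # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are set as (sensitive)
  # environment variables on the Terraform Cloud workspace, so no
  # access_key / secret_key arguments are needed here.
  assume_role {
    role_arn     = "arn:aws:iam::123456789012:role/RemoteAdmin" # placeholder
    session_name = "tfc-run"                                    # placeholder
  }
}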

Hiding Secret Keys in SageMaker (Environment Variables?)

I used to hide connection credentials in environment variables (.bash_profile). Recently, working with SageMaker, I tried a similar process with the terminal available in SageMaker, but I am getting the following error:
NameError: name 'DB_USER' is not defined
Is there any efficient way to hide the credentials in SageMaker?
The recommended way to handle secret storage within AWS is AWS Secrets Manager. Secrets Manager stores secrets securely as key-value pairs. The key benefit is that it allows you to administer access to those secrets via IAM roles and permission abstractions, and retrieve them with the SDK of your choice, such as boto3. Secrets Manager is also used by Amazon SageMaker for Git credential storage in the case of third-party Git integrations.
Extending on Olivier's answer, you could provide your SageMaker endpoint with the proper role in the deployment code like so:
role = 'arn:aws:iam::xxxxxxxxxx:role/service-role/AmazonSageMaker-ExecutionRole-xxxxxxxxxx:role'
sagemaker_model = MXNetModel(model_data='s3://' + bucket + '/model/model.tar.gz',
                             role=role,
                             entry_point='entry_point.py',
                             py_version='py3',
                             framework_version='1.4.1',
                             sagemaker_session=sagemaker_session)
Just remember to provide the necessary permissions in the role you provided.

Error while configuring Terraform S3 Backend

I am configuring the S3 backend through Terraform for AWS.
terraform {
  backend "s3" {}
}
On providing the values for the (S3 backend) bucket name, key, and region when running the "terraform init" command, I get the following error:
"Error configuring the backend "s3": No valid credential sources found for AWS Provider. Please see https://terraform.io/docs/providers/aws/index.html for more information on providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again."
I have declared the access & secret keys as variables in providers.tf. While running the "terraform init" command, it didn't prompt for any access key or secret key.
How to resolve this issue?
When running terraform init, you have to add -backend-config options for your credentials (AWS keys). So your command should look like:
terraform init -backend-config="access_key=<your access key>" -backend-config="secret_key=<your secret key>"
I also had the same issue; the easiest and most secure way to fix it is to configure the AWS profile. Even if you have properly mentioned the AWS_PROFILE in your project, you have to mention it again in your backend.tf.
My problem was that I had already set up the AWS provider in the project as below, and it was working properly.
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}"
}
But at the end of the project I was trying to configure the S3 backend configuration file, so I ran the command terraform init and I also got the same error message.
Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
Note that this is not enough for the Terraform backend configuration; you have to mention the AWS_PROFILE in the backend file as well.
Full Solution
I'm using the latest Terraform version at this moment: v0.13.5.
Please see the provider.tf:
provider "aws" {
region = "${var.AWS_REGION}"
profile = "${var.AWS_PROFILE}" # lets say profile is my-profile
}
For example, if your AWS_PROFILE is my-profile,
then your backend.tf should be as below.
terraform {
  backend "s3" {
    bucket  = "my-terraform--bucket"
    encrypt = true
    key     = "state.tfstate"
    region  = "ap-southeast-2"
    profile = "my-profile" # you have to give the profile name here, not the variable ("${var.AWS_PROFILE}")
  }
}
Then run terraform init.
I faced a similar problem when I renamed a profile in the AWS credentials file. Deleting the .terraform folder and running terraform init again resolved the problem.
If you have already set up a custom AWS profile, use the option below:
terraform init -backend-config="profile=your-profile-name"
If there is no custom profile, then make sure to add the access_key and secret_key to the default profile and try again.
Don't - add variables for secrets. It's a really really bad practice and unnecessary.
Terraform will pick up your default AWS profile, or use whatever AWS profile you set AWS_PROFILE to. If this is in AWS you should be using an instance profile. Roles can be used too.
If you hardcode the profile into your tf code, then you have to have the same profile names wherever you want to run this script, and change it for every different account it's run against.
Don't - do all this cmdline stuff, unless you like wrapper scripts or typing.
Do - Add yourself a remote_state.tf that looks like
terraform {
  backend "s3" {
    bucket = "WHAT-YOU-CALLED-YOUR-STATEBUCKET"
    key    = "mykey/terraform.tfstate"
    region = "eu-west-1"
  }
}
Now when you terraform init:
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
The values in the provider aren't relevant to the perms for the remote_state and could even be different AWS accounts (or even another cloud provider).
I had the same issue even though I was using export AWS_PROFILE as I always had. I checked my credentials, which were correct.
Re-running aws configure fixed it for some reason.
I had the same issue, and below is my use case.
AWS account 1: Management account (IAM user created here and this user will assume role into Dev and Prod account)
AWS account 2: Dev environment account (Role is created here for the trusted account in this case Management account user)
AWS account 3: Prod environment account (Role is created here for the trusted account in this case Management account user)
So I created a dev-backend.conf and a prod-backend.conf file with the content below. The main point that fixed this issue is passing the "role_arn" value in the S3 backend configuration.
Define the content below in the dev-backend.conf and prod-backend.conf files:
bucket = "<your bucket name>"
key = "< your key path>"
region = "<region>"
dynamodb_table = "<db name>"
encrypt = true
profile = "< your profile>" # this profile has access key and secret key of the IAM user created in Management account
role_arn = "arn:aws:iam::<dev/prod account id>:role/<dev/prod role name >"
Initialise Terraform with the dev S3 bucket config, moving from local state to S3 state:
$ terraform init -reconfigure -backend-config="dev-backend.conf"
Run terraform apply using the dev environment variables file:
$ terraform apply --var-file="dev-app.tfvars"
Initialise Terraform with the prod S3 bucket config, moving the state from the dev S3 bucket to the prod S3 bucket:
$ terraform init -reconfigure -backend-config="prod-backend.conf"
Run terraform apply using the prod environment variables file:
$ terraform apply --var-file="prod-app.tfvars"
I decided to put an end to this issue once and for all, since there are a bunch of different topics about this same issue. This issue mainly arises because of the different forms of authentication used while developing locally versus running a CI/CD pipeline. People tend to mix different authentication options together without taking into account the order of precedence.
When running locally you should definitely use the aws cli, since you don't want to have to set access keys every time you run a build. If you happen to work with multiple accounts locally, you can tell the aws cli to switch profiles:
export AWS_PROFILE=my-profile
When you want to run (the same code) in a CI/CD pipeline (e.g. Github Actions, CircleCI), all you have to do is export the required environment variables within your build pipeline:
export AWS_ACCESS_KEY_ID="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
export AWS_REGION="eu-central-1"
This only works if you do not set any hard-coded credentials within the provider block, because the AWS Terraform provider documentation tells us the order of authentication: parameters in the provider configuration are evaluated first, then come environment variables.
Example:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {}

terraform {
  backend "s3" {}
}
Before you plan or apply this, you'll have to initialize the backend:
terraform init \
-backend-config="bucket=${TFSTATE_BUCKET}" \
-backend-config="key=${TFSTATE_KEY}" \
-backend-config="region=${TFSTATE_REGION}"
Best practices:
When running locally use the aws cli to authenticate. When running in a build pipeline, use environment variables to authenticate.
Keep your Terraform configuration as clean as possible, so try to avoid hard-coded settings and keep the provider block empty, so that you'll be able to authenticate dynamically.
Preferably also keep the s3 backend configuration empty and initialize this configuration from environment variables or a configuration file (a sketch of the configuration-file option follows below this list).
The Terraform documentation recommends including .terraform.lock.hcl in your version control so that you can discuss potential changes to your external dependencies via code review.
Setting AWS_PROFILE in a build pipeline is basically useless. Most of the time you do not have the aws cli installed during runtime. If you would somehow need this, then you should probably think about splitting this into separate build pipelines.
Personally, I like to use Terragrunt as a wrapper around Terraform. One of the main reasons is that it enables you to dynamically set the backend configuration. This is not possible in plain Terraform.
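To illustrate the configuration-file option mentioned in the list above, here is a minimal sketch; the file name backend.hcl and the bucket/key/region values are placeholders:
# backend.hcl -- partial backend configuration kept out of the .tf files
bucket = "my-tfstate-bucket"        # placeholder
key    = "my-app/terraform.tfstate" # placeholder
region = "eu-central-1"             # placeholder
# then initialize with: terraform init -backend-config=backend.hcl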
If someone is using LocalStack, for me it only worked using this tip: https://github.com/localstack/localstack/issues/3982#issuecomment-1107664517
backend "s3" {
bucket = "curso-terraform"
key = "terraform.tfstate"
region = "us-east-1"
endpoint = "http://localhost:4566"
skip_credentials_validation = true
skip_metadata_api_check = true
force_path_style = true
dynamodb_table = "terraform_state"
dynamodb_endpoint = "http://localhost:4566"
encrypt = true
}
And don't forget to add the endpoints in the provider:
provider "aws" {
region = "us-east-1"
skip_credentials_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
ec2 = "http://localhost:4566"
s3 = "http://localhost:4566"
dynamodb = "http://localhost:4566"
}
}
In my credentials file, there were two profile names one after another, which caused the error for me. When I removed the second profile name, the issue was resolved.
I experienced this issue when trying to apply some Terraform changes to an existing project. The Terraform commands had been working fine, and I had even worked on the project a couple of hours before the issue started.
I was encountering the following errors:
❯ terraform init
Initializing modules...
Initializing the backend...
╷
│ Error: error configuring S3 Backend: IAM Role (arn:aws:iam::950456587296:role/MyRole) cannot be assumed.
│
│ There are a number of possible causes of this - the most common are:
│ * The credentials used in order to assume the role are invalid
│ * The credentials do not have appropriate permission to assume the role
│ * The role ARN is not valid
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I had my organization VPN turned on when running the Terraform commands, and this caused the commands to fail.
Here's how I fixed it
My VPN caused the issue, this may not apply to everyone.
Turning off my VPN fixed it.

How to manage multiple IAM users on a single EC2 instance?

For different AWS services, I need different IAM users to secure the access control. Sometimes, I even need to use different IAM user credentials within a single project in an EC2 instance. What's the proper way to manage this, and how can I deploy/attach these IAM user credentials to a single EC2 instance?
While I fully agree with the accepted answer that using static credentials is one way of solving this problem, I would like to suggest some improvements over it (and over the proposed Secrets Manager).
What I would advise as an architectural step forward, to achieve full isolation of credentials, keep them dynamic, and avoid storing them in a central place (the Secrets Manager proposed above), is dockerizing the application and running it on AWS Elastic Container Service (ECS). This way you can assign a different IAM role to different ECS tasks (a sketch follows below the list of benefits).
Benefits over the Secrets Manager solution:
- The use case of someone tampering with credentials in Secrets Manager is fully avoided, as the credentials are dynamic (temporary, and automatically assumed through the SDKs)
- Credentials are managed on the AWS side for you
- Only the ECS service can assume this IAM role, meaning you can't have an actual person stealing the credentials, or a developer connecting to the production environment from their local machine with these credentials
AWS Official Documentation for Task Roles
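A minimal Terraform sketch of that per-task role idea (illustrative only; the role name, family, image, and sizing values are placeholders):
# Role that only ECS tasks can assume
resource "aws_iam_role" "task_a" {
  name = "task-a-role" # placeholder
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Each task definition can point at a different task role
resource "aws_ecs_task_definition" "app_a" {
  family                   = "app-a" # placeholder
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  task_role_arn            = aws_iam_role.task_a.arn # credentials the containers in this task receive
  container_definitions = jsonencode([{
    name      = "app"
    image     = "my-app:latest" # placeholder image
    essential = true
  }])
}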
The normal way to provide credentials to applications running on an Amazon EC2 instance is to assign an IAM Role to the instance. Temporary credentials associated with the role will then be provided via Instance Metadata. The AWS SDKs will automatically use these credentials.
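In Terraform terms this is an instance profile attached to the instance; a hedged sketch (the role name, AMI, and instance type below are placeholders):
resource "aws_iam_role" "app" {
  name = "app-instance-role" # placeholder
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_instance_profile" "app" {
  name = "app-instance-profile" # placeholder
  role = aws_iam_role.app.name
}

resource "aws_instance" "app" {
  ami                  = "ami-0123456789abcdef0" # placeholder AMI
  instance_type        = "t3.micro"
  iam_instance_profile = aws_iam_instance_profile.app.name
}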
However, this only works for one set of credentials. If you wish to use more than one credential, you will need to provide the credentials in a credentials file.
The AWS credentials file can contain multiple profiles, eg:
[default]
aws_access_key_id = AKIAaaaaa
aws_secret_access_key = abcdefg
[user2]
aws_access_key_id = AKIAbbbb
aws_secret_access_key = xyzzzy
As a convenience, this can also be configured via the AWS CLI:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAbbbb
AWS Secret Access Key [None]: xyzzy
Default region name [None]: us-east-1
Default output format [None]: text
The profile to use can be set via an Environment Variable:
Linux: export AWS_PROFILE="user2"
Windows: set AWS_PROFILE="user2"
Alternatively, when calling AWS services via an SDK, simply specify the Profile to use. Here is an example with Python from Credentials — Boto 3 documentation:
session = boto3.Session(profile_name='user2')
# Any clients created from this session will use credentials
# from the [user2] section of ~/.aws/credentials.
dev_s3_client = session.client('s3')
There is an equivalent capability in the SDKs for other languages, too.