Issue when using Terraform to manage credentials that access RDS database - amazon-web-services

I created a secret via Terraform. The secret is for accessing an RDS database, which is also defined in Terraform. I don't want to include the username and password in code, so I created an empty secret and then added the credentials manually in the AWS console.
Then in the RDS definition:
resource "aws_rds_cluster" "example_db_cluster" {
cluster_identifier = local.db_name
engine = "aurora-mysql"
engine_version = "xxx"
engine_mode = "xxx"
availability_zones = [xxx]
database_name = "xxx"
master_username = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["username"]
master_password = jsondecode(aws_secretsmanager_secret_version.db_secret_string.secret_string)["password"]
.....
The problem is that when I apply Terraform, the secret is empty, so Terraform can't find the username and password strings, which causes an error. Does anyone have a better way to implement this? It feels like it would be easier to just create the secret in Secrets Manager manually.

You can generate a random_password and add it to your secret using an aws_secretsmanager_secret_version.
Here's an example:
resource "random_password" "default_password" {
length = 20
special = false
}
variable "secretString" {
default = {
usernae = "dbuser"
password = random_password.default_password.result
}
type = map(string)
}
resource "aws_secretsmanager_secret" "db_secret_string" {
name = "db_secret_string"
}
resource "aws_secretsmanager_secret_version" "secret" {
secret_id = aws_secretsmanager_secret.db_secret_string.id
secret_string = jsonencode(var.secretString)
}
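With the secret version now managed by Terraform, the RDS cluster can read the credentials from that resource instead of the manually populated one, which avoids the empty-secret error. A minimal sketch reusing the names above (only the relevant arguments are shown):

resource "aws_rds_cluster" "example_db_cluster" {
  cluster_identifier = local.db_name
  engine             = "aurora-mysql"
  # ... other cluster settings as in the question ...
  master_username = jsondecode(aws_secretsmanager_secret_version.secret.secret_string)["username"]
  master_password = jsondecode(aws_secretsmanager_secret_version.secret.secret_string)["password"]
}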

Related

How to allow an AWS programmatic user to create resources using assume role

I have created a policy X with EC2 and VPC full access and attached it to userA. userA has console access, so using switch role, userA can create an instance from the console.
Now, userB has programmatic access with policy Y, which also has EC2 and VPC full access. But when I tried to create an instance using Terraform, I got this error:
Error: creating Security Group (allow-80-22): UnauthorizedOperation: You are not authorized to perform this operation. Encoded authorization failure message:
Even aws ec2 describe-instances gives an error:
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Can anyone help me with this?
Thanks in advance.
To be honest, there are a couple of mistakes in the question itself, but I have ignored them and provided solutions for both scenarios.
Case 1: Create resources using an IAM user with only programmatic access and the required policies attached directly to it
In general, if you have an AWS IAM user who has programmatic access and already has the required policies attached, then it is pretty straightforward to create any resources within those permissions, like any normal use case.
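For this first case, nothing special is needed in the provider configuration; a minimal sketch (the region is an assumption) that relies on the user's access keys being supplied outside of code:

provider "aws" {
  region = "eu-central-1"
  # Credentials come from the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment
  # variables or a shared credentials file profile; no assume_role block is needed.
}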
Case 2: Create resources using an IAM user with only programmatic access that assumes a role which has the required policies attached (role only)
providers.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

## If you hardcode the role_arn, two provider configs are not required
## (one with the hardcoded value is enough, without any alias).
provider "aws" {
  region = "eu-central-1"
}

provider "aws" {
  alias  = "ec2_and_vpc_full_access"
  region = "eu-central-1"

  assume_role {
    role_arn = data.aws_iam_role.stackoverflow.arn
  }
}
resources.tf
/*
 !! Important !!
 * Currently, the AWS secrets (AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY) used to authenticate Terraform
 * belong to the user that has the AWS managed policy [IAMFullAccess] attached directly, so it can read the role ARN.
*/

# If you have hardcoded the role_arn in the provider config, this can be ignored
# and no aliased provider config is required.
## Using the default provider to read the role.
data "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
data "aws_vpc" "default" {
  provider = aws.ec2_and_vpc_full_access
  default  = true
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_key_pair" "eks_jump_host" {
  provider   = aws.ec2_and_vpc_full_access
  key_name   = "ec2keypair"
  public_key = file("${path.module}/../../ec2keypair.pub")
}

# Example from https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance
# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
data "aws_ami" "ubuntu" {
  provider    = aws.ec2_and_vpc_full_access
  most_recent = true

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }

  owners = ["099720109477"] # Canonical
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_instance" "terraform-ec2" {
  provider        = aws.ec2_and_vpc_full_access
  ami             = data.aws_ami.ubuntu.id
  instance_type   = "t2.micro"
  key_name        = "ec2keypair"
  security_groups = [aws_security_group.t-allow_tls.name]
}

# Using the provider with the role that has the AWS managed policies [EC2 and VPC full access] attached
resource "aws_security_group" "t-allow_tls" {
  provider    = aws.ec2_and_vpc_full_access
  name        = "allow-80-22"
  description = "Allow TLS inbound traffic"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    description      = "http"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}
For the full solution, refer to the GitHub repo. I hope this is what you were looking for and that it helps.
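One detail worth spelling out: the assumed role must trust the IAM user that Terraform authenticates as, otherwise the AssumeRole call itself fails. A hedged sketch of how that role and its trust policy could be defined in Terraform (the user name userB and the managed policy choices are assumptions based on the question, not part of the answer above):

data "aws_caller_identity" "current" {}

resource "aws_iam_role" "stackoverflow" {
  name = "stackoverflow-ec2-vpc-full-access-role"

  # Trust policy: allow the programmatic user (userB in the question) to assume this role.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = "sts:AssumeRole"
      Principal = {
        AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:user/userB"
      }
    }]
  })
}

# Attach the AWS managed policies the role needs.
resource "aws_iam_role_policy_attachment" "ec2_full_access" {
  role       = aws_iam_role.stackoverflow.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2FullAccess"
}

resource "aws_iam_role_policy_attachment" "vpc_full_access" {
  role       = aws_iam_role.stackoverflow.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonVPCFullAccess"
}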

GCP API gateway returning 403 saying managed service "is not enabled for the project"

I'm trying to access a public Cloud Run service and not sure why I keep getting this error message ({"message":"PERMISSION_DENIED:API basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog is not enabled for the project.","code":403}) when hitting the gateway's default hostname path with the API key in the query string. The config has a service account with the role needed to invoke Cloud Run services. All required APIs are also enabled. Here is a link to my entire codebase, but below is my API Gateway-specific Terraform configuration.
resource "google_api_gateway_api" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control]
provider = google-beta
api_id = "basic-express-api"
}
resource "google_api_gateway_api_config" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api.basic_express]
provider = google-beta
api = google_api_gateway_api.basic_express.api_id
api_config_id = "basic-express-cfg"
openapi_documents {
document {
path = "api-configs/openapi-spec-basic-express.yaml"
contents = filebase64("api-configs/openapi-spec-basic-express.yaml")
}
}
lifecycle {
create_before_destroy = true
}
gateway_config {
backend_config {
google_service_account = google_service_account.apig_gateway_basic_express_sa.email
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
# when I added this terraform said that the resource already exists, so I had to tear down all infrastructure and re-provision - also did not make a difference, still getting a 404 error when trying to hit the gateway default hostname endpoint - this resource might be immutable...
}
}
resource "google_api_gateway_gateway" "basic_express" {
depends_on = [google_project_service.api_gateway, google_project_service.service_management, google_project_service.service_control, google_api_gateway_api_config.basic_express, google_api_gateway_api.basic_express]
provider = google-beta
api_config = google_api_gateway_api_config.basic_express.id
gateway_id = "basic-express-gw"
region = var.region
}
resource "google_service_account" "apig_gateway_basic_express_sa" {
account_id = "apig-gateway-basic-express-sa"
depends_on = [google_project_service.iam]
}
# "Identity to be used by gateway"
resource "google_project_iam_binding" "project" {
project = var.project_id
role = "roles/run.invoker"
members = [
"serviceAccount:${google_service_account.apig_gateway_basic_express_sa.email}"
]
}
# https://cloud.google.com/api-gateway/docs/configure-dev-env?&_ga=2.177696806.-2072560867.1640626239#configuring_a_service_account
Try:
PROJECT=[[YOUR-PROJECT]]
SERVICE="basic-express-api-1yy1jgrw4nwy2.apigateway.chrome-courage-336400.cloud.goog"
gcloud services enable ${SERVICE} \
--project=${PROJECT}
As others have pointed out, you need to enable the API's managed service. You can do that via Terraform with the google_project_service resource:
resource "google_project_service" "basic_express" {
project = var.project_id
service = google_api_gateway_api.basic_express.managed_service
timeouts {
create = "30m"
update = "40m"
}
disable_dependent_services = true
}
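If the gateway lives in the same configuration, it can also be worth adding an explicit dependency so the managed service is enabled before the gateway is created. A sketch of the existing gateway resource with that one line added (not part of the original answer):

resource "google_api_gateway_gateway" "basic_express" {
  provider   = google-beta
  api_config = google_api_gateway_api_config.basic_express.id
  gateway_id = "basic-express-gw"
  region     = var.region

  # Wait until the API's managed service has been enabled on the project.
  depends_on = [google_project_service.basic_express]
}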

Defining a ClusterRoleBinding for Terraform service account

So I have a GCP service account that is Kubernetes Admin and Kubernetes Cluster Admin in the GCP cloud console.
I am now trying to give this Terraform service account the cluster-admin ClusterRole in GKE, so it can manage all namespaces, via the following Terraform configuration:
data "google_service_account" "terraform" {
project = var.project_id
account_id = var.terraform_sa_email
}
# Terraform needs to manage cluster
resource "google_project_iam_member" "terraform-gke-admin" {
project = var.project_id
role = "roles/container.admin"
member = "serviceAccount:${data.google_service_account.terraform.email}"
}
# Terraform needs to manage K8S RBAC
# https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control#iam-rolebinding-bootstrap
resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
depends_on = [
google_project_iam_member.terraform-gke-admin,
]
metadata {
name = "cluster-admin-binding-terraform"
}
role_ref {
api_group = "rbac.authorization.k8s.io"
kind = "ClusterRole"
name = "cluster-admin"
}
subject {
api_group = "rbac.authorization.k8s.io"
kind = "User"
name = data.google_service_account.terraform.email
}
# must create a binding on unique ID of SA too
subject {
api_group = "rbac.authorization.k8s.io"
kind = "User"
name = data.google_service_account.terraform.unique_id
}
}
However, this always returns the following error:
Error: clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "client" cannot create resource "clusterrolebindings" in API group "rbac.authorization.k8s.io" at the cluster scope
│
│ with module.kubernetes[0].kubernetes_cluster_role_binding.terraform_clusteradmin,
│ on kubernetes/terraform_role.tf line 15, in resource "kubernetes_cluster_role_binding" "terraform_clusteradmin":
│ 15: resource "kubernetes_cluster_role_binding" "terraform_clusteradmin" {
Any ideas about what is going wrong here?
Could this be related to using Google Groups RBAC?
authenticator_groups_config {
  security_group = "gke-security-groups#${var.acl_group_domain}"
}

data "google_client_config" "provider" {}

provider "kubernetes" {
  cluster_ca_certificate = module.google.cluster_ca_certificate
  host                   = module.google.cluster_endpoint
  token                  = data.google_client_config.provider.access_token
}

Terraform - Multiple accounts with multiple environments (regions)

I am developing the infrastructure (IaC) I want to have in AWS with Terraform. To test, I am using an EC2 instance.
This code has to be deployable across multiple accounts and multiple regions (environments) per developer. This is an example:
account-999:
  developer1: us-east-2
  developer2: us-west-1
  developerN: us-east-1
account-666:
  Staging: us-east-1
  Production: eu-west-2
I've created two .tfvars files, account-999.env.tfvars and account-666.env.tfvars, with the content profile="account-999" and profile="account-666" respectively.
This is my main.tf, which contains the AWS provider and the EC2 instance:
provider "aws" {
version = "~> 2.0"
region = "us-east-1"
profile = var.profile
}
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"]
}
resource "aws_instance" "web" {
ami = data.aws_ami.ubuntu.id
instance_type = "t3.micro"
tags = {
Name = "HelloWorld"
}
}
And the variable.tf file:
variable "profile" {
type=string
}
variable "region" {
description = "Region by developer"
type = map
default = {
developer1 = "us-west-2"
developer2 = "us-east-2"
developerN = "ap-southeast-1"
}
}
But I'm not sure if I'm managing it well. For example, the region variable only contains the values of the account-999 account. How can I solve that?
On the other hand, with this structure, would it be possible to implement modules?
You could use a provider alias to accomplish this. More info about provider aliases can be found here.
provider "aws" {
region = "us-east-1"
}
provider "aws" {
alias = "west"
region = "us-west-2"
}
resource "aws_instance" "foo" {
provider = aws.west
# ...
}
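Regarding the question about modules: an aliased provider configuration can be passed into a child module explicitly via the providers meta-argument. A sketch (the module path here is illustrative, not from the question):

module "web" {
  source = "./modules/web" # hypothetical module

  # Resources inside the module that use the default "aws" provider
  # will actually use the aliased us-west-2 configuration.
  providers = {
    aws = aws.west
  }
}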
Another way to look at this is by using Terraform workspaces. Here is an example:
terraform workspace new account-999
terraform workspace new account-666
Then this is an example of your aws credentials file:
[account-999]
aws_access_key_id=xxx
aws_secret_access_key=xxx
[account-666]
aws_access_key_id=xxx
aws_secret_access_key=xxx
A reference to that account can be used within the provider block:
provider "aws" {
region = "us-east-1"
profile = "${terraform.workspace}"
}
You could even combine both methods!
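For example, the per-developer region map from variable.tf could be combined with workspaces so that each developer's workspace selects its own region. A sketch, assuming workspaces are named after the keys in var.region:

provider "aws" {
  # Look up the region for the current workspace; fall back to us-east-1
  # for workspaces that are not listed in the map.
  region  = lookup(var.region, terraform.workspace, "us-east-1")
  profile = terraform.workspace
}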

How to get a private key from Secrets Manager?

I need to store a private key in AWS, because when I create an EC2 instance I need this private key to authenticate in a provisioner "remote-exec". I don't want to keep it in the repo.
Is it a good idea to save a private key in Secrets Manager and then consume it from there?
And if so, how do I save the private key in Secrets Manager and then retrieve it in Terraform with aws_secretsmanager_secret_version?
In my case, reading the key from file() works, but passing it as a string from the secret fails.
connection {
  host = self.private_ip
  type = "ssh"
  user = "ec2-user"
  #private_key = file("${path.module}/key") <-- is working
  private_key = jsondecode(data.aws_secretsmanager_secret_version.secret_terraform.secret_string)["ec2_key"] # <-- not working. Error: Failed to read ssh private key: no key found
}
I think the reason is due to how you store it. I verified the use of aws_secretsmanager_secret_version in my own sandbox account and it works. However, I stored the key as plain text, not JSON.
Then I successfully used it as follows for an instance:
resource "aws_instance" "public" {
ami = "ami-02354e95b39ca8dec"
instance_type = "t2.micro"
key_name = "key-pair-name"
security_groups = [aws_security_group.ec2_sg.name]
provisioner "remote-exec" {
connection {
type = "ssh"
user = "ec2-user"
private_key = data.aws_secretsmanager_secret_version.example.secret_string
host = "${self.public_ip}"
}
inline = [
"ls -la"
]
}
depends_on = [aws_key_pair.key]
}
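If you also want Terraform to generate the key pair and store the private half in Secrets Manager as plain text in the first place, a hedged sketch (resource and secret names here are illustrative) could look like the following. Note that the private key will also end up in the Terraform state, so the state itself must be protected:

# Generate an SSH key pair with the hashicorp/tls provider.
resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

# Register the public half as an EC2 key pair.
resource "aws_key_pair" "key" {
  key_name   = "key-pair-name"
  public_key = tls_private_key.ssh.public_key_openssh
}

# Store the private half as a plain-text secret (no JSON wrapping),
# so secret_string can be passed straight to the connection's private_key.
resource "aws_secretsmanager_secret" "ec2_key" {
  name = "ec2-private-key"
}

resource "aws_secretsmanager_secret_version" "ec2_key" {
  secret_id     = aws_secretsmanager_secret.ec2_key.id
  secret_string = tls_private_key.ssh.private_key_pem
}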