Access denied issue with GCP SQL created using Terraform - google-cloud-platform

I created the following resources using Terraform:
MySQL instance
DB
User
To create these resources I am using GoogleCloudPlatform/sql-db/google//modules/mysql v13.0.1.
The Terraform template is as follows:
module "backend-db" {
source = "GoogleCloudPlatform/sql-db/google//modules/mysql"
version = "13.0.1"
name = "${var.env}-database"
random_instance_name = true
database_version = "MYSQL_5_7"
project_id = module.dev-project.project_id
region = "australia-southeast1"
zone = "australia-southeast1-a"
tier = "db-n1-standard-1"
additional_databases = var.additional_databases
additional_users = var.additional_users
deletion_protection = false
ip_configuration = {
ipv4_enabled = true
private_network = null
require_ssl = true
allocated_ip_range = null
authorized_networks = local.authorized_networks
}
database_flags = [
{
name = "log_bin_trust_function_creators"
value = "on"
},
]
}
The instance has a public IP, I added my own IP to the authorized networks, and I am trying to connect via MySQL Workbench.
The error I get is: Access denied for user '<USER>'@'<MY_IP>' (using password: YES)
Note: everything works when I create the same resources from the console (UI).
Please help.
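A common cause of this exact error is the host part of the MySQL account: MySQL authenticates 'user'@'host' pairs, so a user created with a host other than % (or your IP) is denied even when the password is correct. It is also worth double-checking how require_ssl interacts with your Workbench connection settings. Below is a hedged sketch of what var.additional_users could contain with a wildcard host; the exact object shape varies between module versions, so check the module's variables.tf for v13.0.1:

additional_users = [
  {
    name            = "app_user"             # hypothetical user name
    password        = var.app_user_password  # assumption: supplied via a variable, not hard-coded
    host            = "%"                    # "%" lets MySQL accept this user from any client IP
    random_password = false
  },
]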

Related

How do I connect RDS with Elastic Beanstalk in Terraform

I have the code to create the Elastic Beanstalk environment with Terraform, and here is the code I found in the Terraform docs to create an RDS instance:
resource "aws_db_instance" "default" {
allocated_storage = 10
db_name = "mydb"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
username = "foo"
password = "foobarbaz"
parameter_group_name = "default.mysql5.7"
skip_final_snapshot = true
}
The problem is that I can't find an example of how to connect this DB to Elastic Beanstalk.
I think the setting option should be the way to go here, i.e., you probably do not need a separate resource for creating the DB. Based on the AWS docs [1], and using the Terraform examples [2], it should be something like:
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:rds:dbinstance"
name = "DBAllocatedStorage"
value = "10"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBDeletionPolicy"
value = "Delete"
}
setting {
namespace = "aws:rds:dbinstance"
name = "HasCoupledDatabase"
value = "true"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngine"
value = "mysql"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBEngineVersion"
value = "5.7"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBInstanceClass"
value = "db.t3.micro"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBPassword"
value = "foobarbaz"
}
setting {
namespace = "aws:rds:dbinstance"
name = "DBUser"
value = "foo"
}
}
However, I am not sure if the parameter_group_name can be set here.
EDIT: Answer updated to create a DB instance with the Elastic Beanstalk environment. However, make sure you understand this part about the HasCoupledDatabase setting from the docs:
Note: If you toggle this value back to true after decoupling the previous database, Elastic Beanstalk creates a new database with the previous database option settings. However, to maintain the security of your environment, it doesn't retain the existing DBUser and DBPassword settings. You need to specify DBUser and DBPassword again.
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-rdsdbinstance
[2] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/elastic_beanstalk_environment#option-settings

Terraform - Cyclic dependency issue on GCP

I am provisioning multiple resources on GCP, including a Cloud SQL (Postgres) DB and one VM instance. I am struggling with a cyclic dependency in Terraform during terraform apply, as:
Cloud SQL (Postgres) needs the IP of the VM for IP whitelisting
The VM uses a start-up script that requires the public IP of the Postgres DB
Hence the cyclic dependency... Do you have any suggestions to tackle this in Terraform?
File that creates the GCP VM (includes a startup script that requires the IP of the Postgres DB):
data "template_file" "startup_script_airbyte" {
template = file("${path.module}/sh_scripts/airbyte.sh")
vars = {
db_public_ip = "${google_sql_database_instance.postgres.public_ip_address}"
db_name_prefix = "${var.db_name}"
db_user = "${var.db_user}"
db_password = "${var.db_password}"
}
}
resource "google_compute_instance" "airbyte_instance" {
name = "${google_project.data_project.project_id}-airbyte"
machine_type = local.airbyte_machine_type
project = google_project.data_project.project_id
metadata_startup_script = data.template_file.startup_script_airbyte.rendered #file("./sh_scripts/airbyte.sh")
allow_stopping_for_update = true
depends_on = [
google_project_service.data_project_services,
]
boot_disk {
initialize_params {
image = "ubuntu-2004-focal-v20210415"
size = 50
type = "pd-balanced"
}
}
network_interface {
network = "default"
access_config {
network_tier = "PREMIUM"
}
}
service_account {
email = google_service_account.airbyte_sa.email
scopes = ["cloud-platform"]
}
}
Script that creates the Postgres DB (requires the IP of the VM above):
resource "google_sql_database_instance" "postgres" {
name = "postgres-instance-${random_id.db_name_suffix.hex}"
project = google_project.data_project.project_id
database_version = "POSTGRES_13"
settings{
tier = "db-f1-micro"
backup_configuration {
enabled = true
start_time = "02:00"
}
database_flags {
name = "cloudsql.iam_authentication"
value = "on"
}
database_flags {
name = "max_connections"
value = 30000
}
#Whitelisting the IPs of the GCE VMs in Postgres
ip_configuration {
ipv4_enabled = "true"
authorized_networks {
name = "${google_compute_instance.airbyte_instance.name}"
value = "${google_compute_instance.airbyte_instance.network_interface.0.access_config.0.nat_ip}"
}
}
}
}
One way to overcome this is to reserve a static public IP with google_compute_address. You create the address before the instance, then attach it to the instance.
This way the IP can be whitelisted in Cloud SQL before the VM is created.
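A minimal sketch of that approach (resource names and the region are illustrative; both the VM and Cloud SQL now depend only on the address, so the cycle disappears):

# Reserve the static public IP up front; nothing here depends on the VM or the DB.
resource "google_compute_address" "airbyte_static_ip" {
  name    = "airbyte-static-ip"
  project = google_project.data_project.project_id
  region  = "us-central1" # assumption: use the region of your VM
}

# Attach the reserved IP to the VM ...
resource "google_compute_instance" "airbyte_instance" {
  # ... (other arguments as in the question)
  network_interface {
    network = "default"
    access_config {
      nat_ip       = google_compute_address.airbyte_static_ip.address
      network_tier = "PREMIUM"
    }
  }
}

# ... and whitelist the same address in Cloud SQL. Inside
# google_sql_database_instance.postgres -> settings -> ip_configuration:
#   authorized_networks {
#     name  = "airbyte"
#     value = google_compute_address.airbyte_static_ip.address
#   }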
The correct solution is to run the Cloud SQL Auth Proxy on the VM. Then you do not need to whitelist IP addresses at all, which removes the cyclic dependency.
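A hedged sketch of what that might look like in the VM's startup script, using the v1 proxy binary (only the VM now references the DB, and the DB's authorized_networks block can be dropped entirely):

resource "google_compute_instance" "airbyte_instance" {
  # ... (other arguments as in the question)

  # The proxy authenticates with the VM's service account, so no IP
  # whitelisting is needed on the Cloud SQL side.
  metadata_startup_script = <<-EOT
    #!/bin/bash
    curl -o /usr/local/bin/cloud_sql_proxy \
      https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64
    chmod +x /usr/local/bin/cloud_sql_proxy
    # connection_name has the form "project:region:instance"; the app then
    # reaches Postgres at 127.0.0.1:5432.
    /usr/local/bin/cloud_sql_proxy \
      -instances=${google_sql_database_instance.postgres.connection_name}=tcp:5432 &
  EOT
}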

I have Terraform that creates an AWS RDS Aurora MySQL cluster; is there a way to create a table on the DB?

I have a Terraform script that creates an AWS RDS Aurora MySQL cluster:
module "cluster" {
source = "terraform-aws-modules/rds-aurora/aws"
name = var.cluster_name
master_username = var.master_username
master_password = var.master_password
create_random_password = false
database_name = var.database_name
engine = var.engine
engine_version = var.engine_version
instance_class = var.instance_class_r5
instances = {
one = {}
2 = {
instance_class = var.instance_class_r5_2
}
}
vpc_id = var.vpc_id
subnets = ["subnet-XXXX", "subnet-XXXX", "subnet-XXXX"]
allowed_security_groups = ["sg-XXXXXXXXXXXXXX"]
allowed_cidr_blocks = ["10.20.0.0/20", "144.121.18.66/32"]
storage_encrypted = true
apply_immediately = true
monitoring_interval = 10
db_parameter_group_name = aws_db_parameter_group.credential.id
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.credential.id
publicly_accessible = true
}
resource "aws_db_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-db-57-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-db-57-parameter-group"
tags = var.tags_required
}
resource "aws_rds_cluster_parameter_group" "credential" {
name = "${var.cluster_name}-aurora-57-cluster-parameter-group"
family = "aurora-mysql5.7"
description = "${var.cluster_name}-aurora-57-cluster-parameter-group"
tags = var.tags_required
}
This creates a database.
I am using Spring Boot, and usually with a database the entity will create the table:
@Entity
@Table(name = "credential")
public class CredentialEntity {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    Long credentialId;
In my yml file I have set:
spring:
  hibernate:
    ddl-auto: update
But it does not create the table. So is there a way to create the table as part of the Terraform script?
I wouldn't recommend doing this, but if you want Terraform to deploy database structures you can try with:
resource "null_resource" "db_setup" {
depends_on = [module.db, aws_security_group.rds_main, aws_default_security_group.default]
provisioner "local-exec" {
command = "mysql --host=${module.cluster.cluster_endpoint} --port=${module.cluster.cluster_port} --user=${module.cluster.cluster_master_username} --password=${module.cluster.cluster_master_password} --database=${module.cluster.cluster_database_name} < ${file(${path.module}/init/db_structure.sql)}"
}
}
(This snippet is based on this answer, where you can find a lot more examples.)
Just note: Terraform manages infrastructure. Once the AWS provider has done its work, you can have the MySQL provider pick up and deploy admin stuff like users, roles, grants, etc. But tables within databases belong to the application; other tools are better suited to managing database objects. See if you can plug Flyway or Liquibase into your pipeline.
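For the users/grants part, here is a hedged sketch with the community MySQL provider (petoju/mysql); the resource types are real, but treat the wiring and names as illustrative, and note Terraform must have network access to the cluster endpoint:

terraform {
  required_providers {
    mysql = {
      source  = "petoju/mysql"
      version = "~> 3.0"
    }
  }
}

# Point the provider at the Aurora cluster created by the module above.
provider "mysql" {
  endpoint = "${module.cluster.cluster_endpoint}:${module.cluster.cluster_port}"
  username = var.master_username
  password = var.master_password
}

resource "mysql_user" "app" {
  user               = "app"
  host               = "%"
  plaintext_password = var.app_password # assumption: defined elsewhere
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = var.database_name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}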

GCP Cloud SQL failed to delete instance because `deletion_protection` is set to true

I have a tf script for provisioning a Cloud SQL instance, along with a couple of DBs and an admin user. I renamed the instance, so a new instance was created, but Terraform runs into issues when deleting the old one:
Error: Error, failed to delete instance because deletion_protection is set to true. Set it to false to proceed with instance deletion
I have tried setting the deletion_protection to false but I keep getting the same error. Is there a way to check which resources need to have the deletion_protection set to false in order to be deleted?
I have only added it to the google_sql_database_instance resource.
My tf script:
// Provision the Cloud SQL instance
resource "google_sql_database_instance" "instance-master" {
  name             = "instance-db-${random_id.random_suffix_id.hex}"
  region           = var.region
  database_version = "POSTGRES_12"
  project          = var.project_id

  settings {
    availability_type = "REGIONAL"
    tier              = "db-f1-micro"
    activation_policy = "ALWAYS"
    disk_type         = "PD_SSD"

    ip_configuration {
      ipv4_enabled    = var.is_public ? true : false
      private_network = var.network_self_link
      require_ssl     = true

      dynamic "authorized_networks" {
        for_each = toset(var.is_public ? [1] : [])
        content {
          name  = "Public Internet"
          value = "0.0.0.0/0"
        }
      }
    }

    backup_configuration {
      enabled = true
    }

    maintenance_window {
      day          = 2
      hour         = 4
      update_track = "stable"
    }

    dynamic "database_flags" {
      iterator = flag
      for_each = var.database_flags
      content {
        name  = flag.key
        value = flag.value
      }
    }

    user_labels = var.default_labels
  }

  deletion_protection = false

  depends_on = [google_service_networking_connection.cloudsql-peering-connection, google_project_service.enable-sqladmin-api]
}

// Provision the databases
resource "google_sql_database" "db" {
  name     = "orders-placement"
  instance = google_sql_database_instance.instance-master.name
  project  = var.project_id
}

// Provision a super user
resource "google_sql_user" "admin-user" {
  name     = "admin-user"
  instance = google_sql_database_instance.instance-master.name
  password = random_password.user-password.result
  project  = var.project_id
}

// Get latest CA certificate
locals {
  furthest_expiration_time = reverse(sort([for k, v in google_sql_database_instance.instance-master.server_ca_cert : v.expiration_time]))[0]
  latest_ca_cert           = [for v in google_sql_database_instance.instance-master.server_ca_cert : v.cert if v.expiration_time == local.furthest_expiration_time]
}

// Get SSL certificate
resource "google_sql_ssl_cert" "client_cert" {
  common_name = "instance-master-client"
  instance    = google_sql_database_instance.instance-master.name
}
It seems your code is going to recreate this SQL instance, while your current tfstate file still records the existing instance with deletion_protection set to true. In this case you first need to change that value to false, either manually in the tfstate file or by setting deletion_protection = false in the code and running terraform apply (beware: at this point your code shouldn't trigger a recreation of the instance). After that you can do anything with your SQL instance.
You will have to set deletion_protection = false, apply it, and then proceed to delete.
As per the documentation:
On newer versions of the provider, you must explicitly set deletion_protection=false (and run terraform apply to write the field to state) in order to destroy an instance. It is recommended to not set this field (or set it to true) until you're ready to destroy the instance and its databases.
Link
Editing Terraform state files directly / manually is not recommended
If you added deletion_protection to the google_sql_database_instance after the database instance was created, you need to run terraform apply before running terraform destroy so that deletion_protection is set to false on the database instance.
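In other words, the destroy is a two-step operation; a minimal sketch:

# Step 1: flip the flag in code; everything else stays unchanged.
resource "google_sql_database_instance" "instance-master" {
  # ...
  deletion_protection = false
}

# Step 2: terraform apply   (writes deletion_protection = false into state)
# Step 3: terraform destroy (or apply again, letting the rename replace the
#         old instance) now succeeds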

Terraform - re-use an existing subnetwork to create a cloud sql instance on GCP

I am attempting to create a Cloud SQL instance on GCP using Terraform. I want to use an existing VPC subnetwork created in an earlier step, but there does not seem to be a way to refer to it; all examples seem to require a new IP range to be set up. This is my current code, which creates the new IP range:
resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  project       = "project_name"
  name          = "private_range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 18
  network       = "projects/project_name/global/networks/vpc_name"
  address       = "192.168.128.0"
}
resource "google_service_networking_connection" "private_vpc_connection" {
provider = google-beta
network = "projects/project_name/global/networks/vpc_name"
service = "servicenetworking.googleapis.com"
reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
resource "google_sql_database_instance" "instance" {
provider = google-beta
project = "project_name"
name = "db-instance10"
region = "us-east1"
database_version = "MYSQL_5_7"
depends_on = [google_service_networking_connection.private_vpc_connection]
settings {
tier = "db-f1-micro"
ip_configuration {
ipv4_enabled = false
private_network = "projects/project_name/global/networks/vpc_name"
}
}
}
provider "google-beta" {
region = "us-east1"
zone = "us-east1-c"
}
When I specify the exact same IP range as the existing subnet, I receive the error:
Error: Error waiting to create GlobalAddress: Error waiting for Creating GlobalAddress: Requested range conflicts with other resources: The provided IP range overlaps with existing subnetwork IP range.
There does not seem to be any obvious way to refer to the existing subnetwork resource as the reserved_peering_ranges parameter only seems to accept the name of an IP address range resource.
Here is the resource specification for the existing subnetwork:
resource "google_compute_subnetwork" "vpc_subnet_name" {
  creation_timestamp       = "2020-06-03T07:28:05.762-07:00"
  enable_flow_logs         = true
  fingerprint              = "ied1TiEZjgc="
  gateway_address          = "192.168.128.1"
  id                       = "us-east1/vpc_subnet_name"
  ip_cidr_range            = "192.168.128.0/18"
  name                     = "vpc_subnet_name"
  network                  = "https://www.googleapis.com/compute/v1/projects/project_name/global/networks/vpc_name"
  private_ip_google_access = true
  project                  = "project_name"
  region                   = "us-east1"
  secondary_ip_range       = []
  self_link                = "https://www.googleapis.com/compute/v1/projects/project_name/regions/us-east1/subnetworks/vpc_subnet_name"

  log_config {
    aggregation_interval = "INTERVAL_5_SEC"
    flow_sampling        = 0.5
    metadata             = "INCLUDE_ALL_METADATA"
  }
}
Connecting to a Cloud SQL instance through a private IP requires configuring private services access, which uses an allocated IP address range that must not overlap with any existing VPC subnet.
The private connection links your VPC network with the service's VPC network. This connection allows VM instances in your VPC network to use internal IP addresses to reach service resources, for example a Cloud SQL instance that has internal IP addresses.
Once created, the allocated IP address range and the connection can be reused with other services.
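A hedged sketch of how the existing network (note: the network, not the subnetwork, is what private services access peers with) can be referenced with a data source, while the allocated range itself stays new and non-overlapping; names are illustrative:

# Look up the VPC that already exists instead of hard-coding its path.
data "google_compute_network" "existing" {
  name    = "vpc_name"
  project = "project_name"
}

# The allocated range must NOT overlap 192.168.128.0/18 (the existing subnet).
resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  project       = "project_name"
  name          = "private-services-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = data.google_compute_network.existing.self_link
}

resource "google_service_networking_connection" "private_vpc_connection" {
  provider                = google-beta
  network                 = data.google_compute_network.existing.self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}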
You can use the module below to create the Cloud SQL instance with an existing private VPC, but you need to modify it according to your network. In this scenario I created a separate private network and am creating the Cloud SQL instance using that network.
https://github.com/gruntwork-io/terraform-google-sql
Get the existing network in your cloud infra from which you want to create your Cloud SQL instance; the command below lists the network URIs:
gcloud compute networks list --uri
You need to put that URI where the network self link is mentioned, and comment out the steps where the VPC gets created. Please refer to the main.tf file below.
The location of this file is Cloud_SQL.terraform\modules\sql_example_postgres-private-ip\examples\postgres-private-ip\main.tf
Add the variables accordingly.
# ------------------------------------------------------------------------------
# LAUNCH A POSTGRES CLOUD SQL PRIVATE IP INSTANCE
# ------------------------------------------------------------------------------

# ------------------------------------------------------------------------------
# CONFIGURE OUR GCP CONNECTION
# ------------------------------------------------------------------------------

provider "google-beta" {
  project = var.project
  region  = var.region
}

terraform {
  # This module is now only being tested with Terraform 0.14.x. However, to make upgrading easier, we are setting
  # 0.12.26 as the minimum version, as that version added support for required_providers with source URLs, making it
  # forwards compatible with 0.14.x code.
  required_version = ">= 0.12.26"

  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "~> 3.57.0"
    }
  }
}

# ------------------------------------------------------------------------------
# CREATE A RANDOM SUFFIX AND PREPARE RESOURCE NAMES
# ------------------------------------------------------------------------------

resource "random_id" "name" {
  byte_length = 2
}

####################################################################

# Reserve global internal address range for the peering
resource "google_compute_global_address" "private_ip_address" {
  provider = google-beta
  # name = local.private_ip_name
  name          = var.vpc_network
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  # network = google_compute_network.private_network.self_link
  # network = google_compute_network.vpc_network.self_link
  network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc/"
}

# Establish VPC network peering connection using the reserved address range
resource "google_service_networking_connection" "private_vpc_connection" {
  provider = google-beta
  # network = google_compute_network.private_network.self_link
  network                 = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}

# ------------------------------------------------------------------------------
# CREATE DATABASE INSTANCE WITH PRIVATE IP
# ------------------------------------------------------------------------------

module "postgres" {
  # When using these modules in your own templates, you will need to use a Git URL with a ref attribute that pins you
  # to a specific version of the modules, such as the following example:
  # source = "github.com/gruntwork-io/terraform-google-sql.git//modules/cloud-sql?ref=v0.2.0"
  source = "../../modules/cloud-sql"

  project = var.project
  region  = var.region
  name    = var.instance_name
  db_name = var.db_name

  engine       = var.postgres_version
  machine_type = var.machine_type

  # To make it easier to test this example, we are disabling deletion protection so we can destroy the databases
  # during the tests. By default, we recommend setting deletion_protection to true, to ensure database instances are
  # not inadvertently destroyed.
  deletion_protection = false

  # These together will construct the master_user privileges, i.e.
  # 'master_user_name'@'master_user_host' IDENTIFIED BY 'master_user_password'.
  # These should typically be set as the environment variable TF_VAR_master_user_password, etc.
  # so you don't check these into source control."
  master_user_password = var.master_user_password

  master_user_name = var.master_user_name
  master_user_host = "%"

  # Pass the private network link to the module
  # private_network = google_compute_network.private_network.self_link
  private_network = "https://www.googleapis.com/compute/v1/projects/lucky-operand-312611/global/networks/myprivatevpc"

  # Wait for the vpc connection to complete
  dependencies = [google_service_networking_connection.private_vpc_connection.network]

  custom_labels = {
    test-id = "postgres-private-ip-example"
  }
}