Retrieve the client certificates from Cloud SQL using Terraform

In order to enforce SSL connections, we specify require_ssl = "true" under the settings.ip_configuration block.
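For context, here is a minimal sketch of where that setting lives (the instance name, database version, region, and tier are illustrative):
resource "google_sql_database_instance" "master" {
  name             = "master-instance"
  database_version = "POSTGRES_9_6"
  region           = "us-central1"

  settings {
    tier = "db-f1-micro"

    ip_configuration {
      # Reject unencrypted connections to the instance
      require_ssl = "true"
    }
  }
}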
We can get the server certificate as follows:
output "servercertificate" {
value = "${google_sql_database_instance.master.server_ca_cert.0.cert}"
}
How do we get/specify the client certificates and the client key for an instance?

With release 1.20.0, one can retrieve client certificates using the google_sql_ssl_cert resource. Here is an example showing how to use it:
resource "google_sql_ssl_cert" "client_cert" {
depends_on = ["google_sql_database_instance.master", "google_sql_database.database", "google_sql_user.user"]
common_name = "terraform-generated"
instance = "${google_sql_database_instance.master.name}"
}
The attributes associated with this resource are outlined in the provider documentation.
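For example, a minimal sketch that surfaces the client certificate and key as outputs (using the cert and private_key attributes exported by this resource):
# Expose the generated client certificate and its private key.
# Note: the private key is stored in the Terraform state, so treat the state file as sensitive.
output "client_certificate" {
  value = "${google_sql_ssl_cert.client_cert.cert}"
}

output "client_key" {
  value = "${google_sql_ssl_cert.client_cert.private_key}"
}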
To use the latest Terraform provider, run terraform init -upgrade

Related

Terraform resets credentials when importing existing RDS db resource

We have a bunch of existing resources on AWS that we want to import under Terraform management. One of these resources is an RDS DB. So we wrote something like this:
resource "aws_db_instance" "db" {
engine = "postgres"
username = var.rds_username
password = var.rds_password
# other stuff...
}
variable "rds_username" {
type = string
default = "master_username"
}
variable "rds_password" {
type = string
default = "master_password"
sensitive = true
}
Note these are the existing master credentials. That's important. Then we did this:
terraform import aws_db_instance.db db-identifier
And then tried terraform plan a few times, tweaking the code to fit the existing resource until finally, terraform plan indicated there were no changes to be made (which was the goal).
However, once we ran terraform apply, it reset the master credentials of the DB instance. Worse, other resources that had previously connected to that DB using these exact credentials suddenly can't anymore.
Why is this happening? Is terraform encrypting the password behind the scenes and not telling me? Why can't other services connect to the DB using the same credentials? Is there any way to import an RDS instance without resetting credentials?
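One common way to keep Terraform from managing the master password after an import is a lifecycle ignore_changes block. A minimal sketch, assuming the goal is to leave the existing credentials untouched:
resource "aws_db_instance" "db" {
  engine   = "postgres"
  username = var.rds_username
  password = var.rds_password
  # other stuff...

  lifecycle {
    # Never try to (re)set the master password on apply
    ignore_changes = [password]
  }
}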

Getting OpenIDConnect provider's HTTPS certificate with Gitlab

I am facing an issue while accessing AWS resources with the OIDC provider from GitLab CI/CD.
The OIDC provider was configured successfully.
I am creating it with the Terraform code below:
data "tls_certificate" "gitlab" {
url = var.mygitlab
}
resource "aws_iam_openid_connect_provider" "gitlab" {
url = var.mygitlab
client_id_list = [var.mygitlab_aud_value]
thumbprint_list = [data.tls_certificate.gitlab.certificates.0.sha1_fingerprint]
}
When I create it manually via the AWS console it works fine, so please guide me on how to create it properly with Terraform.
I generated the fingerprint manually in the AWS console, passed it in as a variable in the Terraform code, and that resolved the issue.
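A minimal sketch of that workaround (the variable name is illustrative):
# Thumbprint obtained manually (e.g. from the AWS console) instead of via the tls_certificate data source
variable "gitlab_thumbprint" {
  type        = string
  description = "SHA1 fingerprint of the GitLab OIDC provider's TLS certificate"
}

resource "aws_iam_openid_connect_provider" "gitlab" {
  url             = var.mygitlab
  client_id_list  = [var.mygitlab_aud_value]
  thumbprint_list = [var.gitlab_thumbprint]
}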

Azure Synapse Studio connectivity using private endpoint

I am trying to set up a secure connection to Azure Synapse Studio using a private link hub and private endpoint, as described in the doc below:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network
However, it throws an error:
"Unable to connect to serverless SQL pool because of a network/firewall issue"
Please note: we use a VPN to connect to the on-premises company network from home and access the dedicated pool using SQL authentication. This works absolutely fine.
The private endpoint and link hub are mounted on the same subnet as the one we use for the dedicated pool, so I don't think there is any problem with allowing certain ports for the serverless pool. Please correct me if I'm wrong.
What am I missing here?
Follow these instructions for troubleshooting the network and firewall:
When creating your workspace, managed virtual network should be enabled, and make sure to allow all IP addresses.
Note: if you do not enable it, Synapse Studio will not be able to create a private endpoint. If you fail to do this during Synapse workspace creation, you will not be able to change it later and will be forced to recreate the Synapse workspace.
Create a managed private endpoint, connect it to your data source, and check whether the managed private endpoint is approved or not.
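For reference, a minimal Terraform sketch of such a managed private endpoint (the names and the storage-account target are illustrative):
# Managed private endpoint from the Synapse managed VNet to a data source
# ("blob" is the target subresource for a storage account)
resource "azurerm_synapse_managed_private_endpoint" "example" {
  name                 = "mpe-to-datalake"
  synapse_workspace_id = azurerm_synapse_workspace.this.id
  target_resource_id   = azurerm_storage_account.example.id
  subresource_name     = "blob"
}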
For more details, please refer to these links:
https://www.thedataguy.blog/azure-synapse-understanding-private-endpoints/
https://www.c-sharpcorner.com/article/how-to-setup-azure-synapse-analytics-with-private-endpoint/
How to set up access control for your Azure Synapse workspace - Azure Synapse Analytics | Microsoft Docs
This is what resolved it for me. Hope this helps someone out there.
I used a Terraform for_each loop to deploy the Private Endpoints. The Synapse workspace is using a managed private network. In order to disable public network access, the Private Link Hub plus the 3 Synapse-specific endpoints (for the sub-resources) are required.
Pre-Reqs:
Private DNS Zones need to exist
Private Link Hub (deployed via TF in the same resource group as the Synapse workspace)
main.tf
# Loop through Synapse subresource names, and create Private Endpoints to each of them
resource "azurerm_private_endpoint" "this" {
  for_each            = var.endpoints
  name                = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = data.azurerm_subnet.subnet.id

  private_service_connection {
    name                           = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
    private_connection_resource_id = each.key == "web" ? azurerm_synapse_private_link_hub.this.id : azurerm_synapse_workspace.this.id
    subresource_names              = [each.key]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = each.value["source_dns_zone_group_name"]
    private_dns_zone_ids = [var.private_dns_zone_config[each.value["source_dns_zone_group_name"]]]
  }

  tags = var.tags

  lifecycle {
    ignore_changes = [
      tags
    ]
  }
}
variables.tf
variable "endpoints" {
description = "Private Endpoint Connections required. 'web' (case-sensitive) is for the Workspace to the Private Link Hub, and Sql/Dev/SqlOnDemand (case-sensitive) are from the Synapse workspace"
type = map(map(string))
default = {
"Dev" = {
alias = "studio"
source_dns_zone_group_name = "privatelink_dev_azuresynapse_net"
}
"Sql" = {
alias = "sqlpool"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"SqlOnDemand" = {
alias = "sqlondemand"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"web" = {
alias = "pvtlinkhub"
source_dns_zone_group_name = "privatelink_azuresynapse_net"
}
}
}
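The module above references a few objects it does not define. A minimal sketch of what those might look like (names are illustrative):
# Subnet the Private Endpoints are placed in
data "azurerm_subnet" "subnet" {
  name                 = "snet-private-endpoints"
  virtual_network_name = "vnet-synapse"
  resource_group_name  = var.resource_group_name
}

# Private Link Hub targeted by the "web" endpoint
resource "azurerm_synapse_private_link_hub" "this" {
  name                = "plhsynapse"
  resource_group_name = var.resource_group_name
  location            = var.location
}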
Appendix:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network#step-4-create-private-endpoints-for-your-workspace-resource
https://learn.microsoft.com/en-gb/azure/private-link/private-endpoint-overview#private-link-resource

Does an AWS Transfer Family API Gateway Identity Provider Need to Use Terraform's `aws_apigatewayv2_api` or `aws_api_gateway_rest_api`?

I'm looking at the Terraform docs for setting up AWS Transfer Family and I see this example:
resource "aws_transfer_server" "example" {
endpoint_type = "VPC"
endpoint_details {
subnet_ids = [aws_subnet.example.id]
vpc_id = aws_vpc.example.id
}
protocols = ["FTP", "FTPS"]
certificate = aws_acm_certificate.example.arn
identity_provider_type = "API_GATEWAY"
url = "${aws_api_gateway_deployment.example.invoke_url}${aws_api_gateway_resource.example.path}"
}
I then look at the Terraform docs for API Gateway and I see two different docs for implementing an API Gateway. Which Terraform resource should I use to build this integration?
According to the custom identity provider documentation, AWS Transfer Family expects a REST endpoint, and therefore you should use aws_api_gateway_rest_api.
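A minimal sketch of the REST-style resources the Transfer Family example refers to (names are illustrative; the methods and the Lambda integration that actually authenticate users are omitted):
resource "aws_api_gateway_rest_api" "example" {
  name        = "transfer-identity-provider"
  description = "Custom identity provider for AWS Transfer Family"
}

# Transfer Family calls GET /servers/{serverId}/users/{username}/config;
# only the first path segment is sketched here
resource "aws_api_gateway_resource" "example" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  parent_id   = aws_api_gateway_rest_api.example.root_resource_id
  path_part   = "servers"
}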

Cannot deploy public api on Cloud Run using Terraform

Terraform now supports Cloud Run, as documented here, and I'm trying the example code below.
resource "google_cloud_run_service" "default" {
name = "tftest-cloudrun"
location = "us-central1"
provider = "google-beta"
metadata {
namespace = "my-project-name"
}
spec {
containers {
image = "gcr.io/cloudrun/hello"
}
}
}
Although it deploys the sample hello service with no error, when I access the auto-generated URL it returns a 403 (Forbidden) response.
Is it possible to create a public Cloud Run API using Terraform?
(When I create the same service using the GUI, GCP provides an "Allow unauthenticated invocations" option under the "Authentication" section, but there seems to be no equivalent option in the Terraform documentation...)
Just add the following code to your Terraform configuration, which will make the service publicly accessible:
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = [
"allUsers",
]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
location = google_cloud_run_service.default.location
project = google_cloud_run_service.default.project
service = google_cloud_run_service.default.name
policy_data = data.google_iam_policy.noauth.policy_data
}
You can also find this here
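If you prefer not to overwrite the service's whole IAM policy, an additive alternative is the google_cloud_run_service_iam_member resource, sketched here:
# Grant only the run.invoker role to allUsers, leaving other bindings untouched
resource "google_cloud_run_service_iam_member" "noauth" {
  location = google_cloud_run_service.default.location
  project  = google_cloud_run_service.default.project
  service  = google_cloud_run_service.default.name
  role     = "roles/run.invoker"
  member   = "allUsers"
}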
This deployment is based only on the Knative Serving spec. Managed Cloud Run implements these specs but has its own internal behavior, such as role checks linked to IAM (not possible with Knative on a Kubernetes cluster, where this is replaced by private/public services). The namespace on managed Cloud Run is the project ID, used as a workaround to identify the project, not a real Kubernetes namespace.
So, the latest news I have from Google (I'm a Cloud Run alpha tester) is that they are working on integrating Cloud Run into Deployment Manager and Terraform. I don't have a deadline, sorry.