I have two services running in ECS Fargate.
I have set up service discovery with a private DNS namespace, as all my services are within a private subnet.
When I try to hit my config container from another container, I get the following error:
http://config.qcap-prod:50050/config: Get
"http://config.qcap-prod:50050/config": dial tcp: lookup
config.qcap-prod on 10.0.0.2:53: no such host
Below is my Terraform:
resource "aws_service_discovery_service" "config" {
name = "config"
dns_config {
namespace_id = aws_service_discovery_private_dns_namespace.qcap_prod_sd.id
dns_records {
ttl = 10
type = "A"
}
}
health_check_custom_config {
failure_threshold = 1
}
}
Is there another step I need to take so that one container can reach another within ECS on Fargate?
My Terraform code for the namespace is:
resource "aws_service_discovery_private_dns_namespace" "qcap_prod_sd" {
name = "qcap.prod"
description = "Qcap prod service discovery"
vpc = module.vpc.vpc_id
}
The fix for this was to set

module "vpc" {
  # ... existing module arguments ...
  enable_dns_support   = true
  enable_dns_hostnames = true
}

in the module block for my VPC module, which allows DNS hostnames to be resolved within my VPC.
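If the VPC is managed directly rather than through a module, the same two flags live on the underlying aws_vpc resource; a minimal sketch (the CIDR is a placeholder):

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16" # placeholder

  # Both flags must be true for private service discovery names
  # to resolve through the VPC resolver
  enable_dns_support   = true
  enable_dns_hostnames = true
}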
I'm trying to learn WireGuard. I found this great tutorial on how to install it on GCP:
https://sreejithag.medium.com/set-up-wireguard-vpn-with-google-cloud-57bb3267a6ef
Very basic (for somebody new to WireGuard), but it did work. The tutorial shows a VM being provisioned with IP forwarding through the GCP web interface.
I wanted to set this up with Terraform. I've searched the Terraform registry and found this:
https://registry.terraform.io/providers/hashicorp/google/latest/docs/data-sources/compute_forwarding_rule
Here's my main.tf with the virtual machine provisioning. Where would I put something like IP forwarding without Terraform complaining?
# This is the provider used to spin up the gcloud instance
provider "google" {
  project     = var.project_name
  region      = var.region_name
  zone        = var.zone_name
  credentials = "mycredentials.json"
}

# Locks the version of Terraform for this particular use case
terraform {
  required_version = "0.14.6"
}

# This creates the google instance
resource "google_compute_instance" "vm_instance" {
  name         = "development-vm"
  machine_type = var.machine_size
  tags         = ["allow-http", "allow-https", "allow-dns", "allow-tor", "allow-ssh", "allow-2277", "allow-mosh", "allow-whois", "allow-openvpn", "allow-wireguard"] # FIREWALL

  boot_disk {
    initialize_params {
      image = var.image_name
      size  = var.disk_size_gb
    }
  }

  network_interface {
    network = "default"

    # Associates our public IP address with this instance
    access_config {
      nat_ip = google_compute_address.static.address
    }
  }

  # We connect to our instance via Terraform and remotely execute our script using SSH
  provisioner "remote-exec" {
    script = var.script_path

    connection {
      type        = "ssh"
      host        = google_compute_address.static.address
      user        = var.username
      private_key = file(var.private_key_path)
    }
  }
}

# We create a public IP address for our google compute instance to utilize
resource "google_compute_address" "static" {
  name = "vm-public-address"
}
For WireGuard, you need to enable IP forwarding. The resource you are trying to use is for HTTP(S) load balancers.
Instead, enable the can_ip_forward attribute on the google_compute_instance resource:

can_ip_forward - (Optional) Whether to allow sending and receiving of packets with non-matching source or destination IPs. This defaults to false.
resource "google_compute_instance" "vm_instance" {
name = "development-vm"
machine_type = var.machine_size
can_ip_forward = true
....
}
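Note that can_ip_forward only lifts GCP's source/destination packet check; WireGuard also needs kernel-level forwarding enabled inside the guest OS. A sketch of one way to do that from the same resource, assuming a Linux image (the sysctl key is the standard Linux one):

resource "google_compute_instance" "vm_instance" {
  # ... arguments as above ...
  can_ip_forward = true

  # Enable kernel IP forwarding in the guest on first boot
  metadata_startup_script = <<-EOT
    sysctl -w net.ipv4.ip_forward=1
    echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf
  EOT
}

Also be aware that changing can_ip_forward on an existing instance forces Terraform to recreate it.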
We are creating infrastructure on GCP for our application, which uses an SSL Proxy Load Balancer. We use Terraform for our deployments and are struggling to create the SSL Proxy Load Balancer via Terraform.
Could anyone point me to sample code, or to resources on creating the load balancer?
You can try with the following example:
resource "google_compute_target_ssl_proxy" "default" {
name = "test-proxy"
backend_service = google_compute_backend_service.default.id
ssl_certificates = [google_compute_ssl_certificate.default.id]
}
resource "google_compute_ssl_certificate" "default" {
name = "default-cert"
private_key = file("path/to/private.key")
certificate = file("path/to/certificate.crt")
}
resource "google_compute_backend_service" "default" {
name = "backend-service"
protocol = "SSL"
health_checks = [google_compute_health_check.default.id]
}
resource "google_compute_health_check" "default" {
name = "health-check"
check_interval_sec = 1
timeout_sec = 1
tcp_health_check {
port = "443"
}
}
Take into consideration that the health check points at port 443/tcp; if you want a different port, change it here.
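The example stops at the target proxy; for the load balancer to actually accept traffic, you also need a global forwarding rule pointing at that proxy. A minimal sketch (the name and port are placeholders):

resource "google_compute_global_forwarding_rule" "default" {
  name       = "ssl-proxy-rule" # placeholder name
  target     = google_compute_target_ssl_proxy.default.id
  port_range = "443"
}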
I have a custom VPC, configured with VPC peering and all. Now I want to deploy a Cloud SQL instance using Terraform under that VPC with only a private IP. Does anyone have any suggestions?
I don't know Terraform well, but in the Terraform docs:
https://www.terraform.io/docs/providers/google/r/sql_database_instance.html
Search for "VPC" and you'll find this snippet:
resource "google_compute_network" "private_network" {
  provider = google-beta
  name     = "private-network"
}

resource "google_compute_global_address" "private_ip_address" {
  provider      = google-beta
  name          = "private-ip-address"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 16
  network       = google_compute_network.private_network.self_link
}

resource "google_service_networking_connection" "private_vpc_connection" {
  provider                = google-beta
  network                 = google_compute_network.private_network.self_link
  service                 = "servicenetworking.googleapis.com"
  reserved_peering_ranges = [google_compute_global_address.private_ip_address.name]
}
I believe those are the config bits you need?
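To complete the picture, a sketch of the Cloud SQL instance itself with only a private IP, following the same docs page (name, region, version, and tier are placeholders; the networking resources are the ones from the snippet above):

resource "google_sql_database_instance" "instance" {
  provider         = google-beta
  name             = "private-instance" # placeholder
  region           = "us-central1"      # placeholder
  database_version = "MYSQL_5_7"        # placeholder

  # The service networking peering must exist before the instance is created
  depends_on = [google_service_networking_connection.private_vpc_connection]

  settings {
    tier = "db-f1-micro" # placeholder

    ip_configuration {
      ipv4_enabled    = false # no public IP
      private_network = google_compute_network.private_network.self_link
    }
  }
}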
I am trying to set up AWS SFTP Transfer in VPC endpoint mode, but there is one thing I can't manage.
The problem is how to get target IPs for the NLB target group.
The only output I found:
output "vpc_endpoint_transferserver_network_interface_ids" {
description = "One or more network interfaces for the VPC Endpoint for transferserver"
value = flatten(aws_vpc_endpoint.transfer_server.*.network_interface_ids)
}
gives network interface IDs, which cannot be used as targets:
Outputs:

api_url = https://12345.execute-api.eu-west-1.amazonaws.com/prod
vpc_endpoint_transferserver_network_interface_ids = [
  "eni-12345",
  "eni-67890",
  "eni-abcde",
]
I went through:
terraform get subnet integration ips from vpc endpoint subnets tab
and
Terraform how to get IP address of aws_lb
but neither of them seems to work. The latter says:
on modules/sftp/main.tf line 134, in data "aws_network_interface" "ifs":
134: count = "${length(local.nlb_interface_ids)}"
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
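For reference, the lookup that the linked answer suggests, and that trips over the count problem above, looks roughly like this (a sketch; local.nlb_interface_ids stands for the list of ENI IDs from the endpoint output):

data "aws_network_interface" "ifs" {
  count = length(local.nlb_interface_ids)
  id    = local.nlb_interface_ids[count.index]
}

# The private IPs that would be registered as targets
output "endpoint_private_ips" {
  value = flatten(data.aws_network_interface.ifs.*.private_ips)
}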
You can create Elastic IPs:

resource "aws_eip" "example1" {
  vpc = true
}

resource "aws_eip" "example2" {
  vpc = true
}
Then specify the Elastic IPs when creating the Network LB:

resource "aws_lb" "example" {
  name               = "example"
  load_balancer_type = "network"

  subnet_mapping {
    subnet_id     = "${aws_subnet.example1.id}"
    allocation_id = "${aws_eip.example1.id}"
  }

  subnet_mapping {
    subnet_id     = "${aws_subnet.example2.id}"
    allocation_id = "${aws_eip.example2.id}"
  }
}
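Separately, since the goal here is to target the endpoint's ENIs, it may help that NLB target groups support target_type = "ip", so the resolved private IPs can be registered directly. A sketch (the names and the local.endpoint_ips list are hypothetical):

resource "aws_lb_target_group" "sftp" {
  name        = "sftp-endpoint" # hypothetical name
  port        = 22
  protocol    = "TCP"
  target_type = "ip"
  vpc_id      = "${var.vpc_id}" # hypothetical variable
}

resource "aws_lb_target_group_attachment" "sftp" {
  count            = "${length(local.endpoint_ips)}" # hypothetical local with the resolved IPs
  target_group_arn = "${aws_lb_target_group.sftp.arn}"
  target_id        = "${element(local.endpoint_ips, count.index)}"
  port             = 22
}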
Terraform doesn't seem to be able to create AWS private hosted Route53 zones, and dies with the following error when I try to create a new private hosted zone associated with an existing VPC:
Error applying plan:
1 error(s) occurred:
aws_route53_zone.analytics: InvalidVPCId: The VPC: vpc-xxxxxxx you provided is not authorized to make the association.
status code: 400, request id: b411af23-0187-11e7-82e3-df8a3528194f
Here's my .tf file:
provider "aws" {
region = "${var.region}"
profile = "${var.environment}"
}
variable "vpcid" {
default = "vpc-xxxxxx"
}
variable "region" {
default = "eu-west-1"
}
variable "environment" {
default = "dev"
}
resource "aws_route53_zone" "analytics" {
vpc_id = "${var.vpcid}"
name = "data.int.example.com"
}
I'm not sure which of these the error is referring to:
The VPC somehow needs to be authorised to associate with the zone in advance.
The AWS account running Terraform needs the correct IAM permissions to associate the zone with the VPC.
Would anyone have a clue how I could troubleshoot this further?
Sometimes you also face this issue when the AWS region configured in the provider config is different from the region in which your VPC is deployed. For such cases, we can use an alias for the AWS provider, like below:
provider "aws" {
region = "us-east-1"
}
provider "aws" {
region = "ap-southeast-1"
alias = "singapore"
}
Then we can use it as below in Terraform resources:
resource "aws_route53_zone_association" "vpc_two" {
provider = "aws.singapore"
zone_id = "${aws_route53_zone.dlos_vpc.zone_id}"
vpc_id = "${aws_vpc.vpc_two.id}"
}
The above snippet is helpful when your Terraform script needs to deploy to multiple regions.
First, check whether you are running the latest version of Terraform.
Second, your code differs from the sample:
data "aws_route53_zone" "selected" {
name = "test.com."
private_zone = true
}
resource "aws_route53_record" "www" {
zone_id = "${data.aws_route53_zone.selected.zone_id}"
name = "www.${data.aws_route53_zone.selected.name}"
type = "A"
ttl = "300"
records = ["10.0.0.1"]
}
You're getting that error code because either your user/role doesn't have the necessary VPC-related permissions or you are using the wrong VPC ID.
I'd suggest you double-check the VPC ID you are using, potentially using the aws_vpc data source to fetch it:
# Assuming you use the "Name" tag on the VPC resource to identify your VPCs
variable "vpc_name" {}

data "aws_vpc" "selected" {
  tags {
    Name = "${var.vpc_name}"
  }
}

resource "aws_route53_zone" "analytics" {
  vpc_id = "${data.aws_vpc.selected.id}"
  name   = "data.int.example.com"
}
You also want to check that your user/role has the necessary VPC-related permissions. For this you'll probably want all of the permissions listed in the docs: