My NEGs are not connecting to my cloud run functions - google-cloud-platform

https://cloud.google.com/load-balancing/docs/https/setting-up-https-serverless#setting_up_regional_routing
I set up a GCP global load balancer and serverless NEGs. What is unclear to me is how a NEG connects to a Cloud Run app. It looks like the service name of a NEG just needs to match the name of a corresponding Cloud Run app.
I have done this, but it appears they're not connected, and I can't even find anything in the docs on how to troubleshoot this linkage.
I created a NEG via Terraform:
resource "google_compute_region_network_endpoint_group" "neg" {
network_endpoint_type = "SERVERLESS"
region = us-east1
cloud_run {
service = "my-cloudrun-app"
}
}
Then I deployed a Cloud Run app:
gcloud run deploy my-cloudrun-app --region us-east1
My understanding is that if the Cloud Run app name matches the NEG's service name, the two should be connected. I can see the NEGs are attached to my load balancer, and the Cloud Run app was deployed successfully, but the NEG doesn't appear to be routing traffic to my app.

I'm using this official GCP module to hook this up (it actually does make it pretty easy!): https://github.com/terraform-google-modules/terraform-google-lb-http/tree/v6.0.1/modules/serverless_negs
It turns out it does work the way I expected; the issue was simply that I didn't have a Cloud Run app behind one of the regional NEGs I created (I thought I did). I had created several regional NEGs and made a bit of a mess, and the regional NEG the load balancer was routing my traffic to didn't point at any deployed Cloud Run app.
How I was able to troubleshoot this:
Find the backend service the load balancer was configured with.
In the GCP console, view that backend and all the regional NEGs configured for it.
Hit refresh/curl a bunch of times and watch the backend's page in the console: one of the regional NEGs was actually receiving traffic, so I could at least see which NEG my requests were being routed to.
Realize I hadn't deployed a Cloud Run app with the name that regional NEG was configured for. (A gcloud equivalent of these checks is sketched below.)
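If you prefer the CLI to the console, you can also check the linkage directly (my-neg here is whatever name you gave the NEG): describing a serverless NEG shows the Cloud Run service it is configured for, and you can then confirm that service actually exists in the same region.
gcloud compute network-endpoint-groups describe my-neg --region=us-east1
gcloud run services describe my-cloudrun-app --region=us-east1
If the second command finds nothing, the NEG is pointing at a service that was never deployed.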
I still feel like visibility into how all these components play together could be better, but the traffic flow diagram on the backend service details page was a lifesaver!

I don't know if you have already done so, but you need more than the NEG to put it behind a load balancer. Here are the missing pieces:
resource "google_compute_managed_ssl_certificate" "default" {
name = "cert"
managed {
domains = ["${var.domain}"]
}
}
resource "google_compute_backend_service" "default" {
name = "app-backend"
protocol = "HTTP"
port_name = "http"
timeout_sec = 30
backend {
group = google_compute_region_network_endpoint_group.neg.id
}
}
resource "google_compute_url_map" "default" {
name = "app-urlmap"
default_service = google_compute_backend_service.default.id
}
resource "google_compute_target_https_proxy" "default" {
name = "https-proxy"
url_map = google_compute_url_map.default.id
ssl_certificates = [
google_compute_managed_ssl_certificate.default.id
]
}
resource "google_compute_global_forwarding_rule" "default" {
name = "lb"
target = google_compute_target_https_proxy.default.id
port_range = "443"
ip_address = google_compute_global_address.default.address
}
resource "google_compute_global_address" "default" {
name = "address"
}
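One more piece that helps: the managed certificate only provisions once your domain's A record points at the load balancer's IP, so it's handy to output the reserved address:
output "load_balancer_ip" {
  value = google_compute_global_address.default.address
}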
Easy?? Absolutely not. Let me know if you need more details, guidance, or explanations.

Related

Elastic Beanstalk Load Balancer Logging in Terraform

I am trying to create an Elastic Beanstalk environment in AWS and need to enable access logs for the load balancer that Beanstalk creates. I could not find any examples in the official Terraform documentation showing how to enable this feature via Terraform code:
resource "aws_elastic_beanstalk_application" "tftest" {
name = "tf-test-name"
description = "tf-test-desc"
}
resource "aws_elastic_beanstalk_environment" "tfenvtest" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
}
I am trying to enable access logs for the load balancer created by Beanstalk, but there is no mention of this feature in the Terraform documentation.
You need to use option settings for Elastic Beanstalk [1]:
resource "aws_elastic_beanstalk_environment" "some_env" {
name = "tf-test-name"
application = aws_elastic_beanstalk_application.tftest.name
solution_stack_name = "64bit Amazon Linux 2015.03 v2.0.3 running Go 1.4"
setting {
namespace = "aws:elbv2:loadbalancer"
name = "AccessLogsS3Bucket"
value = "<valid S3 bucket name>"
}
setting {
namespace = "aws:elbv2:loadbalancer"
name = "AccessLogsS3Enabled"
value = "true"
}
}
Using the same logic, you can optionally define the AccessLogsS3Prefix setting, but it is not required.
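For example, adding the optional prefix looks like this (the prefix value here is just an example):
resource "aws_elastic_beanstalk_environment" "some_env" {
  # ... same settings as above, plus:

  setting {
    namespace = "aws:elbv2:loadbalancer"
    name      = "AccessLogsS3Prefix"
    value     = "my-app-logs" # example prefix
  }
}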
[1] https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elbloadbalancer

Azure synapse studio connectivity using Private end point

I am trying to set up a secure connection to Azure Synapse Studio using a private link hub and private endpoint, as described in the doc below:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network
However, it throws an error:
"Unable to connect to serverless SQL pool because of a network/firewall issue"
Please note: we use a VPN to connect to the on-premises company network from home and access the dedicated pool using SQL authentication. This works absolutely fine.
The private endpoint and link hub are deployed on the same subnet as the one we use for the dedicated pool, so I don't think the problem is with allowing certain ports for the serverless pool. Please correct me if I'm wrong.
What am I missing here?
Follow these instructions for troubleshooting network and firewall issues:
When creating your workspace, enable the managed virtual network and make sure to allow all IP addresses (a Terraform sketch of this setting follows below).
Note: if you do not enable it, Synapse Studio will not be able to create a private endpoint. This setting cannot be changed later, so if you miss it during Synapse workspace creation you will be forced to recreate the workspace.
Create a managed private endpoint, connect it to your data source, and check whether the managed private endpoint is approved.
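In Terraform, this is the workspace argument that must be set at creation time. A minimal sketch, with assumed names for everything around it:
resource "azurerm_synapse_workspace" "this" {
  name                                 = "syn-example" # assumed name
  resource_group_name                  = azurerm_resource_group.this.name
  location                             = azurerm_resource_group.this.location
  storage_data_lake_gen2_filesystem_id = azurerm_storage_data_lake_gen2_filesystem.this.id
  sql_administrator_login              = "sqladminuser"          # assumed
  sql_administrator_login_password     = var.sql_admin_password  # assumed variable

  # The setting referred to above; it cannot be changed after creation
  managed_virtual_network_enabled = true

  identity {
    type = "SystemAssigned"
  }
}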
For more details, please refer to these links:
https://www.thedataguy.blog/azure-synapse-understanding-private-endpoints/
https://www.c-sharpcorner.com/article/how-to-setup-azure-synapse-analytics-with-private-endpoint/
How to set up access control for your Azure Synapse workspace - Azure Synapse Analytics | Microsoft Docs
This is what resolved it for me. Hope this helps someone out there.
I used a Terraform for_each loop to deploy the private endpoints. The Synapse workspace is using a managed private network. In order to disable public network access, the private link hub plus the three Synapse-specific endpoints (for the sub-resources) are required.
Pre-Reqs:
Private DNS Zones need to exist
Private Link Hub (deployed via Terraform in the same resource group as the Synapse workspace; a minimal sketch follows below)
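For reference, the link hub itself is small (the name here is an assumption):
resource "azurerm_synapse_private_link_hub" "this" {
  name                = "plhsynexample" # assumed name
  resource_group_name = var.resource_group_name
  location            = var.location
}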
main.tf
# Loop through Synapse subresource names, and create Private Endpoints to each of them
resource "azurerm_private_endpoint" "this" {
  for_each = var.endpoints

  name                = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
  location            = var.location
  resource_group_name = var.resource_group_name
  subnet_id           = data.azurerm_subnet.subnet.id

  private_service_connection {
    name                           = lower("pep-syn-${var.location}-${var.environment}-${each.value["alias"]}")
    private_connection_resource_id = (each.key == "web") ? azurerm_synapse_private_link_hub.this.id : azurerm_synapse_workspace.this.id
    subresource_names              = [each.key]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = each.value["source_dns_zone_group_name"]
    private_dns_zone_ids = [var.private_dns_zone_config[each.value["source_dns_zone_group_name"]]]
  }

  tags = var.tags

  lifecycle {
    ignore_changes = [
      tags
    ]
  }
}
variables.tf
variable "endpoints" {
description = "Private Endpoint Connections required. 'web' (case-sensitive) is for the Workspace to the Private Link Hub, and Sql/Dev/SqlOnDemand (case-sensitive) are from the Synapse workspace"
type = map(map(string))
default = {
"Dev" = {
alias = "studio"
source_dns_zone_group_name = "privatelink_dev_azuresynapse_net"
}
"Sql" = {
alias = "sqlpool"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"SqlOnDemand" = {
alias = "sqlondemand"
source_dns_zone_group_name = "privatelink_sql_azuresynapse_net"
}
"web" = {
alias = "pvtlinkhub"
source_dns_zone_group_name = "privatelink_azuresynapse_net"
}
}
}
Appendix:
https://learn.microsoft.com/en-us/azure/synapse-analytics/security/how-to-connect-to-workspace-from-restricted-network#step-4-create-private-endpoints-for-your-workspace-resource
https://learn.microsoft.com/en-gb/azure/private-link/private-endpoint-overview#private-link-resource

Exposing an ECS Service to the net

I have created an ECS cluster and a number of services, but I want one of the services to be accessible from the outside world. That service will then interact with the other services.
Created an ECS cluster.
Created services.
Created the apps, packaged into Docker containers.
Updated the security group to allow outside access.
But under network interfaces in the console I can't find any reference to the security group I created; other security groups are listed there, just not mine.
resource "aws_ecs_service" "my_service" {
name = "my_service"
cluster = aws_ecs_cluster.fetcher_service.id
task_definition = "${aws_ecs_task_definition.my_service.family}:${max(aws_ecs_task_definition.my_service.revision, data.aws_ecs_task_definition.my_service.revision)}"
desired_count = 0
network_configuration {
subnets = var.vpc_subnet_ids
security_groups = var.zuul_my_group_ids
assign_public_ip = true
}
}
Am I missing any steps?
If the desired count is set to 0, probably no containers will be spun up in the first place, and so no network interfaces will be allocated. Maybe that's the issue.
Set the desired count to something larger than zero to test this.
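For example, a minimal change to the resource above:
resource "aws_ecs_service" "my_service" {
  # ... configuration as above ...

  # At least one task must be running before an ENI (and its security
  # group association) appears under network interfaces
  desired_count = 1
}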
Thanks to LRutten's answer: I set the desired count to 1, and under network interfaces I now see an interface associated with my security group for that ECS service.

Terraform - how to use the same load balancer between multiple AWS ecs_service resources?

I had a question about creating a service on AWS ECS using Terraform, and would appreciate any and all feedback, especially since I'm an AWS newbie.
I have several services in the same cluster (each service is a machine learning model). The traffic isn't that high, so I would like the same load balancer to route requests to the different services (based on a request header which specifies the model to use).
I was trying to create the services using Terraform (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service) but I'm having a hard time understanding the load_balancer configuration. There is no option to choose the ARN or ID of a specific load balancer, which makes me think that a separate Load Balancer is created for each service - and that sounds expensive :)
Has anyone had any experience with this, who can tell me what is wrong with my reasoning?
Thanks a lot for reading!
Fred, the answer is in the documentation link you posted; let me walk you through it.
Here is how two ECS services can share a single Application Load Balancer (diagram omitted):
The scenario below describes the configuration for one of the services; it is analogous for a second one. The only thing you wouldn't need to repeat is the load balancer declaration.
You can define the following:
# First let's define the Application LB
resource "aws_lb" "unique" {
  name               = "unique-lb"
  internal           = false
  load_balancer_type = "application"
  ... # the rest of the config goes here
}

# Now let's create the target group for service one
resource "aws_lb_target_group" "serviceonetg" {
  name     = "tg-for-service-one"
  port     = 8080 # example value
  protocol = "HTTP"
  ... # the rest of the config goes here
}
# Now create the link between the LB and the target group
resource "aws_lb_listener" "alb_serviceone_listener" {
  load_balancer_arn = aws_lb.unique.arn # Here is the LB ARN
  port              = 80
  protocol          = "HTTP"

  default_action {
    target_group_arn = aws_lb_target_group.serviceonetg.arn # Here is the TG ARN
    type             = "forward"
  }
}

# Forward traffic with the HTTP path /serviceone to the target group;
# conditions belong on a listener rule, not on the listener itself
resource "aws_lb_listener_rule" "serviceone_rule" {
  listener_arn = aws_lb_listener.alb_serviceone_listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.serviceonetg.arn
  }

  condition {
    path_pattern {
      values = ["/serviceone"]
    }
  }
}
# As a last step, you need to link your service with the target group
resource "aws_ecs_service" "service_one" {
  ... # prior configuration goes here

  load_balancer {
    target_group_arn = aws_lb_target_group.serviceonetg.arn # Here you link the service with the TG
    container_name   = "myservice1"
    container_port   = 8080
  }
  ... # the rest of the config goes here
}
As a side note, I would template the repeating parts using data structures, so that count or for_each can describe the target groups, listener rules, and services only once, letting Terraform do the rest. Basically, follow the DRY principle.
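A sketch of that idea with made-up service names, ports, and a vpc_id variable (none of these values come from the question):
# One map drives a target group and a listener rule per service
variable "services" {
  type = map(object({
    port = number
    path = string
  }))
  default = {
    serviceone = { port = 8080, path = "/serviceone" }
    servicetwo = { port = 8081, path = "/servicetwo" }
  }
}

resource "aws_lb_target_group" "svc" {
  for_each = var.services

  name     = "tg-for-${each.key}"
  port     = each.value.port
  protocol = "HTTP"
  vpc_id   = var.vpc_id # assumed variable
}

resource "aws_lb_listener_rule" "svc" {
  for_each = var.services

  listener_arn = aws_lb_listener.alb_serviceone_listener.arn

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.svc[each.key].arn
  }

  condition {
    path_pattern {
      values = [each.value.path]
    }
  }
}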
I hope this can help you.

Cannot deploy public api on Cloud Run using Terraform

Terraform now supports Cloud Run as documented here, and I'm trying the example code below.
resource "google_cloud_run_service" "default" {
name = "tftest-cloudrun"
location = "us-central1"
provider = "google-beta"
metadata {
namespace = "my-project-name"
}
spec {
containers {
image = "gcr.io/cloudrun/hello"
}
}
}
Although it deploys the sample hello service with no errors, when I access the auto-generated URL it returns a 403 (Forbidden) response.
Is it possible to create a public Cloud Run API using Terraform?
(When I create the same service using the GUI, GCP provides an "Allow unauthenticated invocations" option under the "Authentication" section, but there seems to be no equivalent option in the Terraform documentation...)
Just add the following code to your Terraform script, which will make the service publicly accessible:
data "google_iam_policy" "noauth" {
binding {
role = "roles/run.invoker"
members = [
"allUsers",
]
}
}
resource "google_cloud_run_service_iam_policy" "noauth" {
location = google_cloud_run_service.default.location
project = google_cloud_run_service.default.project
service = google_cloud_run_service.default.name
policy_data = data.google_iam_policy.noauth.policy_data
}
You can also find this here
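Note that google_cloud_run_service_iam_policy is authoritative: it replaces the service's entire IAM policy. If you would rather add just the public binding and leave any other bindings alone, a non-authoritative alternative (a sketch, same effect for this use case) is:
resource "google_cloud_run_service_iam_member" "noauth" {
  location = google_cloud_run_service.default.location
  project  = google_cloud_run_service.default.project
  service  = google_cloud_run_service.default.name
  role     = "roles/run.invoker"
  member   = "allUsers"
}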
Here the deployment is only based on the Knative serving spec. Cloud Run (fully managed) implements that spec but has its own internal behavior, such as role checks linked to IAM (not possible with Knative on a K8s cluster, where this is replaced by private/public services). The namespace on Cloud Run (fully managed) is the project ID, a workaround to identify the project, not a real K8s namespace.
The latest news I have from Google (I'm a Cloud Run alpha tester) is that they are working on integrating Cloud Run into Deployment Manager and Terraform. I don't have a deadline, sorry.