How to restrict access to an external load balancer's endpoint - google-cloud-platform

I am very new to GCP. For now I have deployed a hello-world container in GKE. This hello-world is backed by an external load balancer, meaning that it is accessible to everyone on the internet provided they have its IP address.
I would like to restrict the access to this endpoint only to authenticated users (through Google SSO) that are part of my project or my organization. Is there a way to do so?

You need to integrate IAP (Identity-Aware Proxy).
When to use IAP
Use IAP when you want to enforce access control policies for applications and resources. IAP works with signed headers or the App Engine standard environment Users API to secure your app. With IAP, you can set up group-based application access: a resource could be accessible for employees and inaccessible for contractors, or only accessible to a specific department.
Enabling IAP for GKE
IAP is integrated through Ingress for GKE. This integration enables you to control resource-level access for employees instead of using a VPN.
In a GKE cluster, incoming traffic is handled by HTTP(S) Load Balancing, a component of Cloud Load Balancing. The HTTP(S) load balancer is typically configured by the Kubernetes Ingress controller. The Ingress controller gets configuration information from a Kubernetes Ingress object that is associated with one or more Service objects. Each Service object holds routing information that is used to direct an incoming request to a particular Pod and port.
Beginning with Kubernetes version 1.10.5-gke.3, you can add configuration for the load balancer by associating a Service with a BackendConfig object. BackendConfig is a custom resource definition (CRD) that is defined in the kubernetes/ingress-gce repository.
The Kubernetes Ingress controller reads configuration information from the BackendConfig and sets up the load balancer accordingly. A BackendConfig holds configuration information that is specific to Cloud Load Balancing, and enables you to define a separate configuration for each HTTP(S) Load Balancing backend service.
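As a concrete starting point, IAP is switched on per backend service through a BackendConfig that the Service references. A minimal sketch, assuming an OAuth client whose credentials live in a Kubernetes Secret named my-oauth-secret; all resource names here are placeholders:

```yaml
# Enable IAP on a GKE backend service via a BackendConfig (sketch).
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: hello-config
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-oauth-secret   # Secret holding client_id / client_secret
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  annotations:
    # Associate this Service's backend with the BackendConfig above
    cloud.google.com/backend-config: '{"default": "hello-config"}'
spec:
  type: NodePort          # Ingress-backed Services on GKE are typically NodePort
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
```

After IAP is enabled, grant the "IAP-secured Web App User" role to the users or groups in your organization that should be allowed through.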

Related

Is it possible to create an AWS API Gateway REST API integration with an internal load balancer?

I have an ECS cluster with all private IPs and an internal application load balancer. I can access my application load balancer via VPN, but I want to add an API Gateway REST API and serve these APIs publicly. However, there is no option for an application load balancer in the REST API VPC Link section. I am wondering if this is possible?
Amazon API Gateway REST API only supports Network Load Balancer (NLB) for private integrations via VPCLink.
One option in this situation is to create an NLB with the existing ALB as a target group (note that this will incur additional cost). Connect the API Gateway REST API to the NLB via VPCLink.
The following references might be useful:
https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-nlb-for-vpclink-using-console.html
https://aws.amazon.com/blogs/networking-and-content-delivery/application-load-balancer-type-target-group-for-network-load-balancer/
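A hedged sketch of that setup with the AWS CLI is below. Every name, ARN, subnet, and VPC ID is a placeholder; the commands themselves (`create-target-group --target-type alb`, `create-vpc-link`) are the documented ones:

```
# 1. Create a target group of type "alb" and register the existing internal ALB in it
aws elbv2 create-target-group \
    --name alb-targets --target-type alb \
    --protocol TCP --port 80 --vpc-id vpc-0123456789abcdef0

aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/alb-targets/... \
    --targets Id=arn:aws:elasticloadbalancing:...:loadbalancer/app/my-internal-alb/...

# 2. Create an internal NLB with a TCP listener forwarding to that target group
aws elbv2 create-load-balancer \
    --name my-internal-nlb --type network --scheme internal \
    --subnets subnet-aaaa subnet-bbbb

aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/net/my-internal-nlb/... \
    --protocol TCP --port 80 \
    --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/alb-targets/...

# 3. Create the VPC Link that the REST API's private integration will use
aws apigateway create-vpc-link \
    --name my-vpc-link \
    --target-arns arn:aws:elasticloadbalancing:...:loadbalancer/net/my-internal-nlb/...
```

Once the VPC Link is available, point the REST API's integration at the NLB's DNS name with connection type VPC_LINK.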

How to use Identity Aware Proxy (IAP) for protecting certain paths for Cloud Load Balancer

I'm using Google Cloud Load Balancer to expose a dashboard. This load balancer was created by a GKE Ingress. I want to restrict endpoints matching a specific pattern, e.g. */internal/*, so they are accessible only with an organization email.
I've tried to use IAP but it's restricting all URLs.

GKE Multicluster Ingress for private clusters with on-premise hardware load balancers

It appears that Istio-based or MultiCluster Ingress on GKE will only work with external load balancers. We have a requirement (due to a regulatory limitation) that external traffic may ingress only via an on-premise hardware load balancer; this traffic must then be routed to GCP via Partner Interconnect.
With such a setup, supposing we create a static IP with an internal load balancer on GCP, will MultiCluster Ingress work with this internal load balancer?
Alternatively, how would you design a solution if you have multiple GKE clusters that need to be load balanced with this type of on-premise hardware LB type ingress?
MultiCluster Ingress only supports public load balancers.
You can use the new Gateway API with the gke-l7-rilb-mc Gateway class, which should deploy an internal multi-cluster L7 load balancer. You can route external traffic through your on-premise F5 and, via Partner Interconnect, on to the load balancer provisioned by the Gateway API.
Just keep in mind that the Gateway API and controller are still in preview for now; they should be GA by the end of the year: https://cloud.google.com/kubernetes-engine/docs/how-to/deploying-multi-cluster-gateways#blue-green
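A rough sketch of such a Gateway is below. Since the feature is in preview, the exact API surface may change; the name, namespace, and labels are placeholders:

```yaml
# Multi-cluster internal L7 Gateway (preview-era sketch)
kind: Gateway
apiVersion: networking.x-k8s.io/v1alpha1
metadata:
  name: internal-mc-gateway
  namespace: infra
spec:
  gatewayClassName: gke-l7-rilb-mc   # internal multi-cluster L7 load balancer
  listeners:
  - protocol: HTTP
    port: 80
    routes:
      kind: HTTPRoute
      selector:
        matchLabels:
          gateway: internal-mc-gateway   # HTTPRoutes opt in via this label
```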

Which port does the GKE HTTPS load balancer use for health checks?

I want to know which port GKE uses when performing health checks of the backend services.
Does it use the service port declared in the Service YAML, or other specific ports? I'm having trouble getting the backend services healthy.
Google Cloud has special routes for the load balancers and their associated health checks.
Routes that facilitate communication between Google Cloud health check probe systems and your backend VMs exist outside your VPC network, and cannot be removed. However, your VPC network must have ingress allow firewall rules to permit traffic from these systems.
For health checks to work you must create ingress allow firewall rules so that traffic from Google Cloud probers can connect to your backends. As for the port: with container-native load balancing (NEG backends), the health check targets the Pod's serving port; with instance-group backends, it targets the Service's NodePort. GKE infers the health check parameters from the Pod's readinessProbe if one is defined. You can refer to this documentation.
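For reference, 130.211.0.0/22 and 35.191.0.0/16 are the documented source ranges for Google Cloud health check probes. A sketch of such a firewall rule, where the network name, target tag, and port range are placeholders you would adapt to your cluster:

```
gcloud compute firewall-rules create allow-gclb-health-checks \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:30000-32767 \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=gke-node
```

The tcp:30000-32767 range covers the default NodePort range; with NEG backends you would instead allow the Pods' serving port.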

AWS ECS Private and Public Services

I have a scenario where I have to deploy multiple micro-services on AWS ECS. I want to make services able to communicate with each other via APIs developed in each micro-service. I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS. How can I achieve this? Can I use AWS ECS service discovery by having all services in a private subnet to enable communication between each of them? Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
The combination of an AWS load balancer (for public access) and Amazon ECS Service Discovery (for internal communication) is a good choice for this web application.
Built-in service discovery in ECS is another feature that makes it easy to develop a dynamic container environment without needing to manage as many resources outside of your application. ECS and Route 53 combine to provide highly available, fully managed, and secure service discovery.
Service discovery is a technique for getting traffic from one container to another using the container's direct IP address, instead of an intermediary like a load balancer. It is suitable for a variety of use cases:
Private, internal service discovery
Low latency communication between services
Long lived bidirectional connections, such as gRPC.
Yes, you can use AWS ECS service discovery having all services in a private subnet to enable communication between them.
This makes it possible for an ECS service to automatically register itself with a predictable and friendly DNS name in Amazon Route 53. As your services scale up or down in response to load or container health, the Route 53 hosted zone is kept up to date, allowing other services to look up where they need to make connections based on the state of each service.
Yes, you can use a load balancer to make the front-end micro-service accessible to end users over the internet. In such a setup, the backend container, which is in a private subnet, serves public requests through the ALB, while the rest of the containers use AWS service discovery.
Amazon ECS Service Discovery
Let’s launch an application with service discovery! First, I’ll create two task definitions: “flask-backend” and “flask-worker”. Both are simple AWS Fargate tasks with a single container serving HTTP requests. I’ll have flask-backend ask worker.corp to do some work and I’ll return the response as well as the address Route 53 returned for worker. Something like the code below:
import os
import socket

import requests
from flask import Flask

app = Flask(__name__)

namespace = os.getenv("namespace")  # expected to include the leading dot, e.g. ".corp"
worker_host = "worker" + namespace

@app.route("/")
def backend():
    r = requests.get("http://" + worker_host)
    worker = socket.gethostbyname(worker_host)
    return "Worker Message: {}\nFrom: {}".format(r.content, worker)
Note that in this private architecture there is no public subnet, just a private subnet. Containers inside the subnet can communicate with each other using their internal IP addresses, but they need some way to discover each other's IP address.
AWS service discovery offers two approaches:
DNS-based: Route 53 creates and maintains a custom DNS name which resolves to one or more IP addresses of other containers, for example http://nginx.service.production. Other containers can then send traffic to the destination by just opening a connection using this DNS name.
API-based: Containers can query an API to get the list of IP address targets available, and then open a connection directly to one of the other containers.
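To make the API-based approach concrete, here is a minimal, hedged sketch. It assumes a boto3 "servicediscovery" client is passed in (so the lookup can also be exercised against a stub); the namespace and service names are placeholders. AWS Cloud Map's DiscoverInstances call returns each registered instance's private IP in the AWS_INSTANCE_IPV4 attribute.

```python
import random

def resolve_worker(sd_client, namespace="corp", service="worker"):
    """Resolve one healthy 'worker' instance via Cloud Map's DiscoverInstances API.

    `sd_client` is expected to behave like a boto3 "servicediscovery" client;
    the namespace/service names are illustrative placeholders.
    """
    resp = sd_client.discover_instances(
        NamespaceName=namespace,
        ServiceName=service,
        HealthStatus="HEALTHY",   # only return instances passing health checks
    )
    # Pick one target at random; each instance carries its private IP
    # in the AWS_INSTANCE_IPV4 attribute.
    chosen = random.choice(resp["Instances"])
    return chosen["Attributes"]["AWS_INSTANCE_IPV4"]
```

In production the client would be `boto3.client("servicediscovery")`; injecting it keeps the lookup logic testable without AWS credentials.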
You can read more about AWS service discovery and its use cases in amazon-ecs-service-discovery.
According to the documentation, "Amazon ECS does not support registering services into public DNS namespaces"
In other words, when it registers the DNS, it only uses the service's private IP address which would likely be problematic. The DNS for the "public" services would register to the private IP addresses which would only work, for example, if you were on a VPN to the private network, regardless of what your subnet rules were.
I think a better solution is to attach the services to one of two load balancers... one internet facing, and one internal. I think this works more naturally for scaling the services up anyway. Service discovery is cool, but really more for services talking to each other, not for external clients.
I want to deploy the front-end on AWS ECS as well that can be accessed publicly and can also communicate with other micro-services deployed on AWS ECS.
I would use Service Discovery to wire the services internally and the Elastic Load Balancer integration to make them accessible to the public.
The load balancer handles load balancing on the public side, while DNS SRV records handle load balancing for your APIs internally.
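As a rough illustration of how SRV-style records drive that internal load balancing, here is a small self-contained sketch (not an AWS API; the record fields simply mirror DNS SRV semantics: the lowest priority tier wins, and weight biases selection within that tier):

```python
import random

def pick_srv_target(records):
    """Choose one (host, port) target from DNS SRV-style records.

    Per SRV semantics, only the lowest 'priority' tier is eligible;
    within it, targets are picked with probability proportional to 'weight'.
    """
    best = min(r["priority"] for r in records)
    tier = [r for r in records if r["priority"] == best]
    weights = [r["weight"] or 1 for r in tier]  # treat weight 0 as an equal share
    chosen = random.choices(tier, weights=weights, k=1)[0]
    return chosen["host"], chosen["port"]
```

A caller would refresh the record list from DNS periodically, so scale-ups and scale-downs registered in the Route 53 hosted zone are picked up automatically.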
There is a similar question here on Stack Overflow and the answer [1] to it outlines a possible solution using the load balancer and the service discovery integrations in ECS.
Can I use Elastic Load Balancer to make front-end micro-service accessible to end-users over the internet only via HTTP/HTTPS protocols while keeping it in a private subnet?
Yes, the load balancer can register targets in a private subnet.
References
[1] https://stackoverflow.com/a/57137451/10473469