Do GCP Internal Load Balancers support gRPC with serverless NEGs?

I am running a number of Cloud Run services that all have VPC access via a VPC connector, with all egress routed through that connector. I have an ILB set up that points to a regional backend service with a serverless network endpoint group (NEG) backend. When you select this backend type, you are unable to choose the protocol for the service (HTTP, HTTPS, HTTP/2).
The receiving Cloud Run service allows unauthenticated invocations, with ingress set to allow internal and Cloud Load Balancing traffic.
When my client tries to send messages to my server via an address that resolves to the ILB, it fails with a very nondescript error: rpc error: code = Unknown desc =.
I have tried using the direct Cloud Run URL instead of going via my ILB, and that does work. I would prefer to use my internal DNS, though, if possible.
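One way to narrow down where that "code = Unknown" error comes from is to probe the ILB with grpcurl, once over plaintext h2c and once over TLS. gRPC through an HTTP(S) proxy load balancer generally requires HTTP/2, and on the client-to-LB leg that usually means TLS, so a client speaking plaintext gRPC to the proxy is a common cause of opaque failures. The host name below is a placeholder for whatever your internal DNS record resolves to, and this sketch assumes the target service has gRPC reflection enabled; run it from a VM inside the VPC.

```shell
# ilb.internal.example.com is a placeholder for the internal DNS record
# that resolves to the ILB forwarding rule.

# Plaintext gRPC (h2c) -- an HTTPS-terminating proxy will typically reject
# this, which can surface client-side as "rpc error: code = Unknown desc =".
grpcurl -plaintext ilb.internal.example.com:80 list

# gRPC over TLS (h2). -insecure skips certificate verification, which is
# handy while testing with a self-signed certificate on the ILB frontend.
grpcurl -insecure ilb.internal.example.com:443 list
```

If the TLS probe succeeds where the plaintext one fails, pointing the client at the HTTPS frontend (with appropriate credentials) is the likely fix.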

Related

ERROR: Access is forbidden when trying cloud run service-to-service communication

I'm trying to implement Cloud Run service-to-service communication.
Aim: service A (frontend) needs to call service B (content-api), which is connected to a Cloud SQL DB.
Implemented using the official doc - https://cloud.google.com/run/docs/authenticating/service-to-service
My present setup is as below.
Frontend service config
Created a new service account and attached it.
Created a serverless VPC connector in the host project and configured it with all traffic through this connector.
Ingress is set to allow all traffic
Authentication is set to allow unauthenticated invocations
Content-api config
Created another new service account and attached it.
Used the same serverless vpc access connector which is in the host project and configured with all traffic through this connector.
Ingress is set to allow internal traffic only.
Authentication is set to require authentication (the frontend service code fetches a token from the metadata server and is able to connect using that token)
Also configured the Cloud Run Invoker role for the frontend service account principal in content-api (via the Show Info Panel settings).
Expecting to get data from content-api when frontend service is triggered.
I'm able to trigger the frontend service but get an Access Forbidden error (I'm guessing because content-api is set to allow internal ingress only). But when I change the content-api ingress setting to allow all traffic, it works fine: the frontend requests a token, uses it to call content-api, which queries the DB and responds with the expected value.
What could be the cause of the Access Forbidden error with the internal ingress setting, and how can it be resolved? Thanks in advance for your answers/suggestions.
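For reference, the token fetch described above (frontend requesting an identity token from the metadata server and using it to call content-api) typically looks like the following. The content-api URL is a placeholder; the metadata host only resolves from inside Cloud Run or GCE.

```shell
# AUDIENCE is the URL of the receiving service (placeholder here).
AUDIENCE="https://content-api-xxxxx-uc.a.run.app"

# Fetch an identity token from the metadata server for that audience.
TOKEN=$(curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=${AUDIENCE}")

# Call content-api with the token as a Bearer credential.
curl -H "Authorization: Bearer ${TOKEN}" "${AUDIENCE}"
```

Note that a valid token is not enough on its own: with ingress set to internal, the request must also arrive via the VPC (i.e. the caller's egress actually routed through the serverless VPC connector), otherwise it is rejected regardless of authentication.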

How to allow calls to Cognito from an AWS ECS container instance?

I have a setup with an ALB and a target group created by ECS. I'm using Fargate and created a build pipeline by following this article. My app is built with .NET Core and has an Angular frontend. I got all this working and I'm able to deploy my code changes, but I'm a bit stuck with the following issue.
I'm using Cognito for authentication with a custom domain that I set for the hosted UI. From the browser, when I try to hit an endpoint that is secured, I get a 504 Gateway Timeout error, and the redirect to Cognito never happens in the browser. All of this works fine when I run the application on localhost.
When I looked at the logs, I noticed the following exception:
System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'https://cognito-idp.<region>.amazonaws.com/<region_and_a_code>/.well-known/openid-configuration'
Apparently, it can't establish a connection to Cognito. My containers are using only port 80, my target group instances are also using port 80, ALB uses HTTPS on 443 which directs the traffic to the target group, and for ALB port 80 I just redirect to 443.
I tried a few different things, like setting the Authority value instead of the metadata address, using a BackchannelHttpHandler to execute the HTTPS call, and updating the port mappings to allow communication on 443, but that seems to get overridden by the task definition I created when I set up the build pipeline. The network mode in my task definition is now awsvpc, and if I try to set it to host, it complains that host mode can't be used with Fargate.
What do I need to do to allow the HTTPS request from my Docker container instances to reach Cognito?
You are trying to set this up with a public ALB. This setup will work with a private NLB, and it might work with a private ALB as well. You can then set up VPC private links (AWS PrivateLink) to reach the services you need access to.
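The IDX20803 error quoted in the question means the container could not fetch Cognito's OIDC discovery document over HTTPS. A quick way to check outbound connectivity is to curl that document from inside the running task (for example via ECS Exec); the region and user-pool ID below are placeholders for the values in your issuer URL.

```shell
# Placeholders: substitute your Cognito region and user-pool ID.
ISSUER="https://cognito-idp.eu-west-1.amazonaws.com/eu-west-1_XXXXXXXXX"

# From inside the container (e.g. via `aws ecs execute-command`), verify
# the task can reach Cognito's discovery endpoint on port 443.
curl -sS "${ISSUER}/.well-known/openid-configuration"

# If this times out, the task's subnet usually lacks a route to the
# internet (a NAT gateway for private subnets, or a public IP for public
# subnets), or the security group blocks egress on 443.
```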

ECS service unreachable from cluster

We are using ECS on EC2 for orchestration of Docker containers.
We are also using AWS Cloud Map / service discovery to create endpoints for services.
In one of my clusters, we are not able to reach the endpoint of any service, including the service running in the same cluster. It gives me the below error:
closing connection 0
curl: (6) Could not resolve host: xxx-xxx.test.xxx.org.uk
When I try with the IP instead of the domain name, like 1x.1XX.7*.2X:port/healtcheck/path, it works for all services.
I have checked all security groups and NACLs and everything looks fine.
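curl error (6) is a DNS failure rather than a connectivity one, so security groups and NACLs would not be the cause. Cloud Map service-discovery names live in a Route 53 private hosted zone, which is only resolvable from a VPC that has DNS support enabled and is associated with that zone. A rough set of checks, with the VPC ID and region as placeholders:

```shell
# vpc-0123... is a placeholder for the VPC the cluster instances run in.

# The VPC must have both DNS resolution and DNS hostnames enabled for
# private hosted zone records to resolve.
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport
aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsHostnames

# Confirm the namespace's private hosted zone is associated with this VPC.
aws route53 list-hosted-zones-by-vpc --vpc-id vpc-0123456789abcdef0 --vpc-region eu-west-2

# From an instance in the VPC, query the VPC resolver directly.
dig @169.254.169.253 xxx-xxx.test.xxx.org.uk
```

If the dig against the VPC resolver works but plain resolution does not, the instance's /etc/resolv.conf is pointing at a custom resolver that does not forward to the VPC DNS.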

Private IP address of Azure VM being returned as address in WSDL file when accessing WSDL file from browser connected to an Azure Application Gateway

I have a SOAP service running on Tomcat that is deployed in an Azure scale set, with an Azure Application Gateway fronting the scale set. When I try to access the WSDL (/service?wsdl) via a web browser using the Application Gateway's DNS name, the WSDL that is returned contains the private IP address of the VM that processed the request. This prevents the endpoints from being accessed, since they are private. If I access the WSDL by going directly to the back-end VM's DNS name, the address returned contains the public host name of the VM I sent the request to, and it can be accessed since it is public. I don't have this problem when I deploy a similar setup in the AWS environment using an AWS ELB in front of the scaling group.
I am able to get this to work by configuring the Tomcat connector to use proxyName and proxyPort to specify the host name of the Azure Application Gateway. However, there are other SOAP clients that are required to access the back-end VMs directly on that same connector, and specifying the proxy parameters for the connector forces them to go through the Azure Application Gateway as well.
I realize that a different Tomcat connector can be configured to address this, but that is not an optimal solution for the back-end application.
So, to the question: is there some Azure Application Gateway configuration setting I can change so that this works like AWS ELB, without having to use the proxyName Tomcat parameter?
Thanks.
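For reference, the two-connector workaround mentioned above is a server.xml fragment along these lines: the original connector stays untouched for clients addressing the VMs directly, while a second connector carries the proxy settings for traffic arriving via the Application Gateway. Ports and the gateway host name here are placeholders.

```xml
<!-- Original connector: used by SOAP clients that address the VMs directly. -->
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" />

<!-- Second connector: for traffic arriving via the Application Gateway.
     proxyName/proxyPort make Tomcat generate WSDL addresses pointing at the
     gateway's DNS name instead of the VM's private IP (placeholder values). -->
<Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000"
           proxyName="appgw.example.azure.com" proxyPort="443" scheme="https" />
```

The Application Gateway's backend HTTP settings would then target the second connector's port.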

AWS API Gateway to .NET Core Web Api running in ECS

EDIT
I now realise that I need to install a certificate on the server and validate the client certificate separately. I'm looking at https://github.com/xavierjohn/ClientCertificateMiddleware
I believe the certificate has to be from one of the CA's listed in AWS doco - http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-supported-certificate-authorities-for-http-endpoints.html
This certificate allows API Gateway to establish a HTTPS connection to the instance and it passes along the client certificate that can be validated.
ORIGINAL POST
I am trying to configure a new microservices environment and I'm having a few issues.
Here is what I'm trying to achieve:
Angular website connects to backend API via API Gateway (URL: gateway.company.com.au)
API Gateway is configured with 4 stages - DEV, UAT, PreProd and PROD
The resources in API Gateway are an HTTP proxy to the back-end services via a Network Load Balancer. Each service in each stage gets a different port allocated: i.e. 30,000, 30,001, etc. for DEV; 31,000, 31,001, etc. for UAT
The network load balancer has a DNS of services.company.com.au
AWS ECS hosts the docker containers for the back-end services. These services are .NET Core 2.0 Web API projects
The ECS task definition specifies the container image to use and has a port mapping configured - Host Port: 0 Container Port: 4430. A host port of 0 is dynamically allocated by ECS.
The network load balancer has a listener for each microservice port and forwards the request to a target group. There is a target group for each service for each environment.
The target group includes both EC2 instances in the ECS cluster and ports are dynamically assigned by ECS
This port is then mapped by ECS/Docker to the container port of 4430
In order to prevent clients from calling services.company.com.au directly, the API Gateway is configured with a Client Certificate.
In my Web API, I'm building the web host as follows:
.UseKestrel(options =>
{
    options.Listen(new IPEndPoint(IPAddress.Any, 4430), listenOptions =>
    {
        const string certBody = "-----BEGIN CERTIFICATE----- Copied from API Gateway Client certificate -----END CERTIFICATE-----";
        var cert = new X509Certificate2(Encoding.UTF8.GetBytes(certBody));
        var httpsConnectionAdapterOptions = new HttpsConnectionAdapterOptions
        {
            ClientCertificateMode = ClientCertificateMode.AllowCertificate,
            SslProtocols = System.Security.Authentication.SslProtocols.Tls,
            ServerCertificate = cert
        };
        listenOptions.UseHttps(httpsConnectionAdapterOptions);
    });
})
My DockerFile is:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "microservice.company.com.au.dll"]
When I use Postman to try and access the service, I get a 504 Gateway timeout. The CloudWatch log shows:
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Sending request to http://microservice.company.com.au:30000/service
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Execution failed due to an internal error
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Method completed with status: 504
I've been able to get the following architecture working:
API Gateway
Application Load Balancer - path-based routing to direct to the right container
ECS managing ports on the load balancer
The container listening on HTTP port 80
Unfortunately, this leaves the services open on the DNS of the Application Load Balancer, because API Gateway can only reach public load balancers.
I'm not sure where it's failing but I suspect I've not configured .NET Core/Kestrel correctly to terminate the SSL using the Client Certificate.
In relation to this overall architecture, it would make things easier if:
The public Application Load Balancer could be used with a HTTPS listener using the Client Certificate of API Gateway to terminate the SSL connection
API Gateway could connect to internal load balancers without using Lambda as a proxy
Any tips or suggestions will be considered but at the moment, the main goal is to get the first architecture working.
If more information is required, let me know and I will update the question.
The problem was caused by the security group attached to the EC2 instances that formed the ECS cluster not allowing the correct port range. The security group for each EC2 instance in the cluster needs to allow the ECS dynamic port range.
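A rough sketch of the fix described above, with placeholder IDs: open the ECS dynamic host port range on the cluster instances' security group. Because an NLB preserves source addresses, the rule typically allows the VPC CIDR rather than a load-balancer security group. The dynamic range is commonly 32768-65535 on recent ECS-optimized AMIs (older ones used 49153-65535); check your instances' ephemeral port configuration.

```shell
# Placeholders: the cluster instances' security group ID and the VPC CIDR.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 32768-65535 \
  --cidr 10.0.0.0/16   # VPC CIDR, since the NLB preserves source IPs
```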