AWS API Gateway to .NET Core Web API running in ECS

EDIT
I now realise that I need to install a certificate on the server and validate the client certificate separately. I'm looking at https://github.com/xavierjohn/ClientCertificateMiddleware
I believe the certificate has to be from one of the CAs listed in the AWS documentation - http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-supported-certificate-authorities-for-http-endpoints.html
This certificate allows API Gateway to establish an HTTPS connection to the instance; API Gateway then presents its client certificate, which the backend can validate.
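Roughly what I have in mind for that validation, as a minimal sketch (the expected thumbprint is a placeholder - it would be the thumbprint of the client certificate generated by API Gateway):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// Minimal sketch of client certificate validation middleware, in the
// spirit of the ClientCertificateMiddleware project linked above.
public class ClientCertificateValidationMiddleware
{
    private readonly RequestDelegate _next;

    // Placeholder - the thumbprint of the API Gateway client certificate.
    private const string ExpectedThumbprint = "PLACEHOLDER_THUMBPRINT";

    public ClientCertificateValidationMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Kestrel exposes the certificate negotiated during the TLS
        // handshake when ClientCertificateMode allows/requires one.
        var clientCert = context.Connection.ClientCertificate;

        if (clientCert == null || clientCert.Thumbprint != ExpectedThumbprint)
        {
            context.Response.StatusCode = StatusCodes.Status403Forbidden;
            return;
        }

        await _next(context);
    }
}

It would be registered early in Configure with app.UseMiddleware<ClientCertificateValidationMiddleware>();.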
ORIGINAL POST
I am trying to configure a new microservices environment and I'm having a few issues.
Here is what I'm trying to achieve:
Angular website connects to backend API via API Gateway (URL: gateway.company.com.au)
API Gateway is configured with 4 stages - DEV, UAT, PreProd and PROD
The resources in API Gateway are an HTTP proxy to the back-end services via a Network Load Balancer. Each service in each stage is allocated a different port: e.g. 30000, 30001, etc. for DEV; 31000, 31001, etc. for UAT
The network load balancer has a DNS of services.company.com.au
AWS ECS hosts the docker containers for the back-end services. These services are .NET Core 2.0 Web API projects
The ECS task definition specifies the container image to use and has a port mapping configured - Host Port: 0, Container Port: 4430. A host port of 0 is dynamically allocated by ECS (sketched after this list).
The network load balancer has a listener for each microservice port and forwards the request to a target group. There is a target group for each service for each environment.
The target group includes both EC2 instances in the ECS cluster, with the ports dynamically assigned by ECS
Each dynamically assigned port is then mapped by ECS/Docker to the container port of 4430
In order to prevent clients from calling services.company.com.au directly, the API Gateway is configured with a Client Certificate.
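For clarity, here's roughly how that port mapping looks when the task definition is registered via the AWS SDK for .NET (a sketch only - the family and image names are placeholders, and the console or a JSON task definition works equally well):

using System.Collections.Generic;
using Amazon.ECS;
using Amazon.ECS.Model;

var ecs = new AmazonECSClient();

await ecs.RegisterTaskDefinitionAsync(new RegisterTaskDefinitionRequest
{
    Family = "microservice-company-com-au", // placeholder family name
    ContainerDefinitions = new List<ContainerDefinition>
    {
        new ContainerDefinition
        {
            Name = "microservice",
            Image = "my-registry/microservice:latest", // placeholder image
            Memory = 512,
            PortMappings = new List<PortMapping>
            {
                // HostPort = 0 => ECS picks a free ephemeral port on the
                // instance; the NLB target group tracks whichever port
                // is chosen and Docker maps it to container port 4430.
                new PortMapping { HostPort = 0, ContainerPort = 4430, Protocol = TransportProtocol.Tcp }
            }
        }
    }
});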
In my Web API, I'm building the web host as follows:
// Requires: using System.Net; using System.Text;
//           using System.Security.Cryptography.X509Certificates;
//           using Microsoft.AspNetCore.Server.Kestrel.Https;
.UseKestrel(options =>
{
    // Listen on all interfaces on port 4430 (the container port).
    options.Listen(new IPEndPoint(IPAddress.Any, 4430), listenOptions =>
    {
        // PEM body copied from the API Gateway client certificate.
        const string certBody = "-----BEGIN CERTIFICATE----- Copied from API Gateway Client certificate -----END CERTIFICATE-----";
        var cert = new X509Certificate2(Encoding.UTF8.GetBytes(certBody));

        var httpsConnectionAdapterOptions = new HttpsConnectionAdapterOptions
        {
            // Ask for (but don't require) a client certificate.
            ClientCertificateMode = ClientCertificateMode.AllowCertificate,
            // SslProtocols.Tls corresponds to TLS 1.0.
            SslProtocols = System.Security.Authentication.SslProtocols.Tls,
            ServerCertificate = cert
        };
        listenOptions.UseHttps(httpsConnectionAdapterOptions);
    });
})
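(In hindsight - per the edit above - the certBody here is only the public half of API Gateway's client certificate, so Kestrel has no private key to terminate TLS with. I believe a working listener would look more like the following, assuming a proper server certificate in a PFX file; the path and password are placeholders:)

.UseKestrel(options =>
{
    options.Listen(new IPEndPoint(IPAddress.Any, 4430), listenOptions =>
    {
        listenOptions.UseHttps(new HttpsConnectionAdapterOptions
        {
            // Server certificate with its private key (placeholders).
            ServerCertificate = new X509Certificate2("server.pfx", "pfx-password"),
            // Request the API Gateway client certificate from the peer;
            // it is then validated separately (e.g. in middleware).
            ClientCertificateMode = ClientCertificateMode.AllowCertificate,
            SslProtocols = System.Security.Authentication.SslProtocols.Tls12
        });
    });
})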
My Dockerfile is:
FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80 443
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "microservice.company.com.au.dll"]
When I use Postman to try and access the service, I get a 504 Gateway timeout. The CloudWatch log shows:
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Sending request to http://microservice.company.com.au:30000/service
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Execution failed due to an internal error
(e4d594b7-c8f3-11e7-8458-ef6f94e65b64) Method completed with status: 504
I've been able to get the following architecture working:
API Gateway
Application Load Balancer - path-based routing to direct to the right container
ECS managing ports on the load balancer
The container listening on HTTP port 80
Unfortunately, this leaves the services open on the DNS of the Application Load Balancer, because API Gateway can only access public load balancers.
I'm not sure where it's failing but I suspect I've not configured .NET Core/Kestrel correctly to terminate the SSL using the Client Certificate.
In relation to this overall architecture, it would make things easier if:
The public Application Load Balancer could be used with an HTTPS listener, using the Client Certificate of API Gateway to terminate the SSL connection
API Gateway could connect to internal load balancers without using Lambda as a proxy
Any tips or suggestions will be considered but at the moment, the main goal is to get the first architecture working.
If more information is required, let me know and I will update the question.

The problem was caused by the security group attached to the EC2 instances that formed the ECS cluster not allowing the correct port range. The security group for each EC2 instance in the cluster needs to allow the ECS dynamic port range.
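If you want to script that change, here is a rough sketch with the AWS SDK for .NET (AWSSDK.EC2 package). The security group ID and CIDR are placeholders, and the port range assumes the common ephemeral defaults; check /proc/sys/net/ipv4/ip_local_port_range on your instances for the exact range.

using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

class OpenDynamicPorts
{
    static async Task Main()
    {
        var ec2 = new AmazonEC2Client();

        // Allow the ECS dynamic (ephemeral) port range from inside the
        // VPC; 32768-65535 covers the common defaults. Placeholders:
        // the security group ID and the VPC CIDR.
        await ec2.AuthorizeSecurityGroupIngressAsync(new AuthorizeSecurityGroupIngressRequest
        {
            GroupId = "sg-0123456789abcdef0",
            IpPermissions = new List<IpPermission>
            {
                new IpPermission
                {
                    IpProtocol = "tcp",
                    FromPort = 32768,
                    ToPort = 65535,
                    Ipv4Ranges = new List<IpRange> { new IpRange { CidrIp = "10.0.0.0/16" } }
                }
            }
        });
    }
}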

Related

Do GCP Internal Load Balancers support gRPC with Serverless NEGs

I am running a number of Cloud Run services which all have VPC access via a VPC connector, with all egress routed through this connector. I have an ILB set up which points to a regional backend service with a serverless network endpoint group (NEG) type. When you select this type, you are unable to choose the protocol for the service (HTTP, HTTPS, HTTP/2).
The receiving Cloud Run service is set to allow unauthenticated invocations, with ingress set to internal/cloud-load-balancing.
When my client tries to send messages to my server via an address that resolves to the ILB, it fails with a very nondescript error: rpc error: code = Unknown desc =.
I have tried using the direct Cloud Run URL instead of going via my ILB, and this does work. I would prefer to use my internal DNS though, if possible.

AWS internal load balancer net::ERR_NAME_NOT_RESOLVED when calling a service

I have a 3-tier ECS containers application. In the presentation tier I have a public subnet where an Angular app runs on an nginx server; for that I have an internet-facing application load balancer. In a private subnet I have a Java Spring REST API service that runs on a Tomcat server on port 8080; for that there's an internal application load balancer. In the other private subnet I have an RDS database.
The application client sends requests to the internal load balancer URL and renders the response in the application.
While I am able to SSH to the EC2 instance in the public subnet and curl the REST service in the private subnet and get a response:
curl -X POST http://internal-qa-XXXXX-XXXXXXX.eu-west-2.elb.amazonaws.com:8080/api/products/all
I am not able to receive a response when accessing the client in the browser. The application runs correctly, however when inspecting the browser console I see:
POST http://internal-qa-XXXXX-XXXXXXX.eu-west-2.elb.amazonaws.com:8080/api/products/all net::ERR_NAME_NOT_RESOLVED.
I checked the containers with docker logs <container_id> and they run just fine.
Security groups and NACLs are configured correctly; I even checked with all traffic allowed.
Based on the comments.
The issue is most likely caused by the fact that the URL of the internal load balancer is being called from the client side, i.e. the browser.
The URL of an internal load balancer isn't publicly resolvable, hence the net::ERR_NAME_NOT_RESOLVED.
To solve this, either the application has to be modified to use only publicly available endpoints, or the internal load balancer has to be changed to internet-facing.

How can I host an SSL REST API through AWS using a Docker image?

I've gotten a bit lost in the number of services in AWS and I'm having a difficult time finding the answer to what I think is probably a very simple question.
I have a Docker image that's serving a REST API over HTTP on port 80. I am currently hosting this on AWS with ECS. It's using Fargate, but I could create an EC2 cluster if need be.
The problems are:
1) I currently get a new IP address whenever I run my task; I want a consistent address to access it from. It doesn't need to be a static IP, it could be routed from DNS.
2) It's not using my hostname. I would like api.myhostname.com to go to the Docker container, while www.myhostname.com already goes to my CloudFront CDN serving the web application.
3) There's no SSL and I would need this to be encrypted.
Which services should I be using to make this happen? I looked into API Gateway and didn't find a way to use an ECS task as a backend. I looked into ELB for ECS, but load balancers didn't seem to provide a way to give the Docker containers a stable address.
Thanks.
I'll suggest a service for each of your requirements:
you want to run a Docker container: ECS using Fargate is the right solution
you want a consistent address: use Service Load Balancing, which is integrated into ECS. [1] You can also achieve consistent addressing using Service Discovery if the price of running a load balancer is too high in your scenario. [2]
you want SSL: AWS Elastic Load Balancing integrates with AWS Certificate Manager (ACM), which allows you to create HTTPS listeners. [3]
you want to use your hostname: use AWS Route53 and an Application Load Balancer. The load balancer automatically receives a hostname from AWS, and you can then point your custom DNS at that entry. [4]
So my advice is:
Create an ECS service which starts your Docker container as a Fargate task.
Create a certificate for your HTTPS listener in AWS Certificate Manager. ACM manages your certificates and sends you an email when they are about to expire. [5]
Use Service Load Balancing with an Application Load Balancer to automatically register any newly created ECS tasks to a target group. Configure the load balancer to listen for incoming traffic on an HTTPS listener and route it to the target group which has your ECS tasks registered as targets. A rough sketch of this setup follows.
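If you manage infrastructure as code, the pieces wire together roughly like this with the AWS CDK in C# (a sketch only; all names, ARNs, and domains are placeholders):

using Amazon.CDK;
using Amazon.CDK.AWS.CertificateManager;
using Amazon.CDK.AWS.ECS;
using Amazon.CDK.AWS.ECS.Patterns;
using Amazon.CDK.AWS.ElasticLoadBalancingV2;
using Amazon.CDK.AWS.Route53;
using Constructs;

public class ApiStack : Stack
{
    public ApiStack(Construct scope, string id, IStackProps props = null)
        : base(scope, id, props)
    {
        // Route53 hosted zone for the apex domain (assumed to exist).
        var zone = HostedZone.FromLookup(this, "Zone",
            new HostedZoneProviderProps { DomainName = "myhostname.com" });

        // ACM certificate for the API subdomain (placeholder ARN).
        var cert = Certificate.FromCertificateArn(this, "Cert",
            "arn:aws:acm:us-east-1:123456789012:certificate/placeholder");

        // Fargate service behind an internet-facing ALB with an HTTPS
        // listener; a DNS record for api.myhostname.com is created too.
        new ApplicationLoadBalancedFargateService(this, "ApiService",
            new ApplicationLoadBalancedFargateServiceProps
            {
                TaskImageOptions = new ApplicationLoadBalancedTaskImageOptions
                {
                    Image = ContainerImage.FromRegistry("my-account/my-api"),
                    ContainerPort = 80
                },
                Protocol = ApplicationProtocol.HTTPS,
                Certificate = cert,
                DomainName = "api.myhostname.com",
                DomainZone = zone
            });
    }
}

The ApplicationLoadBalancedFargateService construct handles the target group registration and the Route53 alias record for you.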
References
[1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
[2] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-service-discovery.html
[3] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
[4] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/using-domain-names-with-elb.html
[5] https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html

Exposing Istio Ingress Gateway as NodePort to GKE and run health check

I'm running Istio Ingress Gateway in a GKE cluster. The Service runs with a NodePort. I'd like to connect it to a Google backend service. However, we need a health check that runs against Istio. Do you know if Istio exposes any HTTP endpoint to run a health check and verify its status?
Per this installation guide, "Istio requires no changes to the application itself. Note that the application must use HTTP/1.1 or HTTP/2.0 protocol for all its HTTP traffic because the Envoy proxy doesn't support HTTP/1.0: it relies on headers that aren't present in HTTP/1.0 for routing."
The healthcheck doesn't necessarily run against Istio itself, but against the whole stack behind the IP addresses you configured for the load balancer backend service. It simply requires a 200 response on / when invoked with no host name.
You can configure this by installing a small service like httpbin as the default path for your gateway.
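A quick way to see what the health checker will get is to hit the gateway the same way it does - a plain GET on / by IP address, so no host name is involved. A small sketch (NODE_IP and the NodePort value are placeholders):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HealthProbe
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // Requesting by raw IP sends no virtual host name, which
            // matches how the GCP health checker calls the backend.
            var response = await client.GetAsync("http://NODE_IP:31380/");

            // The backend service health check expects a 200 here.
            Console.WriteLine((int)response.StatusCode);
        }
    }
}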
You might also consider changing your Service to a LoadBalancer type, annotated to be internal to your network (no public IP). This will generate a Backend Service, complete with healthcheck, which you can borrow for your other load balancer. This method has worked for me with nesting load balancers (to migrate load) but not for a proxy like Google's IAP.

SSL certificate for communication between load balancer and servers necessary?

I am using the Google Cloud Platform to implement a REST API which is accessible through HTTPS only using a load balancer.
My setup looks like this:
VM instances:
2 instances which run the same node.js server. One outputs "server1", the other outputs "server2".
Instance groups:
One instance group which contains both VMs.
Back-end services:
One back-end service which uses the instance group and a simple health check.
Load balancing:
One load balancer.
Frontend: HTTPS PUBLIC_IP:443 my-ssl-certificate
Backend: My back-end service
Host and path rules: All unmatched (default) => My back-end service (default)
I now configured my domain's (api.domain.com) DNS with an A record for PUBLIC_IP. https://api.domain.com's output successfully switches between "server1" and "server2". The load balancer and the HTTPS certificate my-ssl-certificate are working great! my-ssl-certificate is a Let's Encrypt SSL certificate for my domain api.domain.com.
Question: Do I need 2 other certificates for my 2 VM instances when they communicate with the load balancer? Or is this communication internal and therefore doesn't require further SSL certificates? If I need those certificates, how do I set them up with IPs?
I ask because accessing my 2 VM instances' IPs via https://VM1_PUBLIC_IP results in a Chrome warning that the certificate is not valid.
If you are using a load balancer with SSL certificates, then there is no need for public-facing VMs. You should keep them in private subnets, and communication between the load balancer and the VMs should happen over private IPs.