I have an application stack consisting of three services in AWS ECS. I have been planning to implement a service mesh using AWS App Mesh, and I followed the instructions below to set up mTLS for my services.
https://awscloudfeed.com/whats-new/security/how-to-use-acm-private-ca-for-enabling-mtls-in-aws-app-mesh
Using the technique described in the blog post, I was able to set up mTLS, and communication works fine from the virtual gateway to the services.
But when one of the services tries to access another service, it fails to make a connection. The services are built with Node.js, and one service (let's say A) uses the request library to call service B. From my understanding of the service mesh, the TLS session should be initiated by the Envoy proxy of service A and terminated by the Envoy proxy of service B. In that case I should use the service discovery URL of service B (e.g. http://serviceb.example.com) when calling it from service A. When I do so, I get an ECONNRESET error with the message "socket hang up". And when using the https protocol (e.g. https://serviceb.example.com), I get an ECONNRESET error with a TLS error message.
But if I disable the client certificate requirement for service B, I am able to access it from service A with the https protocol. Does this mean that to set up mTLS in App Mesh I need to load the client certificate through the application itself? I would have expected the request to go through without issue, since the client certificate is provided through the backend client policy configuration.
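For reference, here is roughly what the call from service A looks like (a simplified sketch; serviceb.example.com stands in for the real service discovery name and /health is just an example path):

// Service A calling service B through the mesh. With App Mesh TLS my
// understanding is that the app speaks plain HTTP to the virtual service
// name and the Envoy sidecar originates (m)TLS on its behalf.
const request = require('request');

request(
  { url: 'http://serviceb.example.com/health', method: 'GET' },
  (err, res, body) => {
    if (err) {
      // This is where I currently see ECONNRESET / socket hang up.
      console.error('Call to service B failed:', err.code, err.message);
      return;
    }
    console.log('Service B responded:', res.statusCode, body);
  }
);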
Can you help me understand how App Mesh mTLS works and whether I am missing something in my App Mesh configuration?
Thank You
Can anyone tell me what kind of service fits the use case below?
I want to expose a public IP that receives HTTPS/HTTP requests and forwards the traffic to services I have on-prem.
I'm looking at Azure, AWS, etc. Is there a service that solves this problem?
Regards...
If you are using Azure and want HTTPS requests to be sent to your backend APIs (which can be on-prem or on any cloud), you can look at Azure API Management (APIM).
You can use APIM with or without a VNet.
APIM can be deployed in external mode if you want to integrate a VNet for data-plane operations; this exposes a public IP as well as a gateway URL that you can use to send HTTPS traffic.
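As a rough sketch, a client calling an API published behind the APIM gateway could look like this in Node.js (the hostname and path are placeholders, and the Ocp-Apim-Subscription-Key header is only needed when the API requires a subscription):

// Calling an API published behind an APIM gateway; APIM forwards the
// request to the backend (on-prem or elsewhere) per its routing rules.
const https = require('https');

const options = {
  hostname: 'my-apim-instance.azure-api.net', // placeholder gateway host
  path: '/my-api/orders',                     // placeholder API path
  headers: {
    'Ocp-Apim-Subscription-Key': process.env.APIM_SUBSCRIPTION_KEY,
  },
};

https
  .get(options, (res) => {
    let body = '';
    res.on('data', (chunk) => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
  })
  .on('error', (err) => console.error('Request failed:', err.message));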
Reference:
https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet?tabs=stv2
https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts#scenarios
Additionally, you can check out Application Gateway.
Reference:
https://learn.microsoft.com/en-us/azure/architecture/example-scenario/gateway/firewall-application-gateway
Is it possible to use an AWS Application Load Balancer for RSocket?
An AWS Application Load Balancer can also be used for WebSocket connections, and my project uses RSocket with WebSocket as its transport. This made me wonder whether this load balancer can be used for RSocket as well.
On the one hand, I would think it is possible, as the load balancer only receives a connection and passes it on to the target RSocket server.
On the other hand, if all RSocket frames go through the load balancer, it might not know how to handle these frames, which would make it unusable for this purpose.
I couldn't find much about RSocket and load balancing online besides this post, but that covers client-side load balancing, while I was looking for server-side load balancing.
And this post, but it uses LoadBalanceSocketClient, whereas I want to find out whether an AWS Application Load Balancer can be used.
Here is a simple diagram of what I would like to have (if possible):
The RSocket client connects to the load balancer, which passes the connection to an RSocket server (for example, server A). Then the client and RSocket server A can communicate.
AWS will see this as a typical WebSocket service, so as long as the load balancer lets HTTP/1.1 connections through and lets them upgrade to WebSocket, there shouldn't be a problem; this is very standard. Ideally it won't see individual frames of the traffic, and your app will handle all frames on a single WebSocket connection. (API Gateway, by contrast, does deal with individual messages: https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-set-up-websocket-deployment.html.) You should ignore RSocket client-side load balancing and focus on AWS WebSocket routing.
As an example, with GCP (instead of AWS) the complexity is that WebSockets bump you up from App Engine Standard to Flexible. The demo site https://demo.rsocket.io/ is deployed to GCP and exposes WebSockets.
The additional kink is that you may want stateful (sticky) routing if you want client resumption.
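To illustrate, an rsocket-js client would simply point its WebSocket transport at the load balancer's DNS name; the ALB only handles the HTTP/1.1 upgrade and then proxies the connection. A sketch assuming the rsocket-js 0.x API (the URL is a placeholder):

// RSocket client connecting through an ALB over WebSocket.
const { RSocketClient } = require('rsocket-core');
const RSocketWebSocketClient = require('rsocket-websocket-client').default;

const client = new RSocketClient({
  setup: {
    keepAlive: 60000,   // ping interval (ms)
    lifetime: 180000,   // drop the connection if no keepalive for this long
    dataMimeType: 'application/json',
    metadataMimeType: 'application/json',
  },
  // The ALB forwards the upgraded WebSocket connection to one of the
  // RSocket servers in the target group; all frames then flow over
  // that single connection without the ALB inspecting them.
  transport: new RSocketWebSocketClient({
    url: 'wss://my-alb.example.com', // placeholder ALB DNS name
  }),
});

client.connect().subscribe({
  onComplete: (socket) => {
    socket.requestResponse({ data: JSON.stringify({ ping: true }) }).subscribe({
      onComplete: (payload) => console.log('response:', payload.data),
      onError: (err) => console.error(err),
    });
  },
  onError: (err) => console.error('connection failed:', err),
});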
Backstory (can be skipped): The other day I got a Cloud Run service connecting to MySQL over full SSL without really doing any SSL cert work, which was great! Just click 'only allow SSL' in GCP, click 'generate server certs', allow my Cloud Run service access to the database instance, swap out the TCP socket factory for Google's factory, set some properties, and it worked!
PROBLEM:
NOW, I am trying to figure out secure Cloud Run service-to-service communication and am reading
https://cloud.google.com/run/docs/authenticating/service-to-service
which has us requesting a token over HTTP. Why is this not over HTTPS? Is the communication from my Docker container to the token service actually encrypted?
Can two Cloud Run services communicate HTTP-to-HTTP and still have the traffic encrypted?
thanks,
Dean
From https://cloud.google.com/compute/docs/storing-retrieving-metadata#is_metadata_information_secure:
When you make a request to get information from the metadata server, your request and the subsequent metadata response never leave the physical host that is running the virtual machine instance.
The traffic from your container to the metadata server at http://metadata/ never leaves the physical host, so SSL is not required; there is no opportunity for it to be intercepted.
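For example, fetching an identity token from the metadata server and using it to call another Cloud Run service looks roughly like this (the receiving service's URL is a placeholder):

// Get an identity token from the metadata server. Plain HTTP is fine here
// because the request never leaves the physical host. The actual
// service-to-service call then goes over HTTPS with the token attached.
const http = require('http');
const https = require('https');

const receivingUrl = 'https://my-receiving-service-abc123-uc.a.run.app'; // placeholder

function getIdentityToken(audience, callback) {
  http
    .get(
      {
        host: 'metadata.google.internal',
        path:
          '/computeMetadata/v1/instance/service-accounts/default/identity' +
          '?audience=' + encodeURIComponent(audience),
        headers: { 'Metadata-Flavor': 'Google' }, // required header
      },
      (res) => {
        let token = '';
        res.on('data', (chunk) => (token += chunk));
        res.on('end', () => callback(null, token));
      }
    )
    .on('error', callback);
}

getIdentityToken(receivingUrl, (err, token) => {
  if (err) throw err;
  https.get(receivingUrl, { headers: { Authorization: `Bearer ${token}` } }, (res) =>
    console.log('status:', res.statusCode)
  );
});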
I can access the Kubernetes API to get the deployments using kubectl proxy.
I get the list of deployments with:
127.0.0.1:8001/apis/apps/v1/deployments
This gets the deployments locally. But what HOST and PORT should I use to access the deployments on the cluster remotely, via the AWS server, rather than locally?
I am new to Kubernetes, so if the question is unclear please let me know.
Any help is appreciated.
kubectl proxy forwards your traffic locally, adding your authentication for you.
Your public API endpoint can be exposed in different ways (or it can be completely inaccessible from the public network), depending on your cluster setup.
In most cases it will be exposed at something like https://api.my.cluster.fqdn, possibly with a custom port like https://api.my.cluster.fqdn:6443, and it will require authentication, e.g. by obtaining a bearer token or using a client certificate. It is reasonable to use a client library to connect to the API.
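For example, with the official JavaScript client the kubeconfig supplies the endpoint and credentials for you; a sketch using @kubernetes/client-node (the response shape shown is from the pre-1.0 versions of the client):

// List deployments via the API server using credentials from kubeconfig,
// instead of hand-building requests against a host and port.
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // reads ~/.kube/config: server URL, CA, credentials

const appsApi = kc.makeApiClient(k8s.AppsV1Api);

appsApi.listDeploymentForAllNamespaces().then((res) => {
  res.body.items.forEach((d) =>
    console.log(d.metadata.namespace, d.metadata.name)
  );
});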
Let me start this by saying I am fairly new to k8s. I'm using kops on AWS.
I currently have 3 deployments on a cluster.
A frontend nginx image serving an Angular web app. One pod. External service.
A socket.io server. Internal service. (This is a chat application, and we decided to separate this server from our API. Was this a good idea?)
An API that is called by both the socket.io server and the web application. Internal service. (Should it be external?)
The socket.io deployment and the API seem to be able to communicate through the cluster IPs and the corresponding services I have set up for the deployments; however, the web app times out when querying the API.
From the web app, I am querying the API using the API's cluster IP address. Should I be requesting a different address?
Additionally, what is the best way to configure these addresses in my files so that I don't have to change them each time I create a new deployment? (The cluster IP addresses change every time you tear down and recreate the deployment.)
If I understood correctly, your frontend web application depends on the API server and sends requests to it. In that case, your API service should be available from outside the cluster. That means it should be exposed as a NodePort or LoadBalancer service type.
P.S. You can refer to a service by its ClusterIP only from inside the cluster.
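As for the changing addresses: inside the cluster, refer to the API by its Service DNS name instead of its ClusterIP; the name stays stable across redeployments. A sketch (my-api, the namespace, and the port are placeholders):

// From the socket.io pod, call the API via its Service DNS name:
// <service-name>.<namespace>.svc.cluster.local, or just <service-name>
// when both pods are in the same namespace.
const http = require('http');

const apiBase = 'http://my-api.default.svc.cluster.local:3000';

http.get(apiBase + '/messages', (res) => {
  let body = '';
  res.on('data', (chunk) => (body += chunk));
  res.on('end', () => console.log(res.statusCode, body));
});

The browser-side web app cannot use these names; it has to go through the externally exposed NodePort or LoadBalancer address.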