Host and Port to access the Kubernetes API - amazon-web-services

I can access the Kubernetes API to get the deployments using the Kubernetes proxy.
I get the list of deployments with:
127.0.0.1:8001/apis/apps/v1/deployments
This gets the deployments locally. But what HOST and PORT should I use to access the deployments from the cluster, i.e. not locally through the proxy but via the AWS server?
I am new to Kubernetes, so if the question is unclear please let me know.
Any help is appreciated.

kubectl proxy forwards your traffic locally, adding your authentication for you.
Your public API endpoint can be exposed in different ways (or it can be completely inaccessible from the public network), depending on your cluster setup.
In most cases it is exposed at something like https://api.my.cluster.fqdn, or on a custom port like https://api.my.cluster.fqdn:6443, and it requires authentication, e.g. obtaining a bearer token or using a client certificate. It is reasonable to use a client library to connect to the API, as sketched below.
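As a minimal sketch, assuming your kubeconfig's current context already points at the AWS-hosted cluster with valid credentials, the official @kubernetes/client-node library can list deployments against the real endpoint; the host, port, and auth details all come from the kubeconfig rather than being hard-coded:

```typescript
// Minimal sketch, assuming a kubeconfig whose current context points at the
// AWS-hosted cluster (the API host/port and bearer token or client cert are
// read from it). Uses the official @kubernetes/client-node library.
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // reads ~/.kube/config (or the KUBECONFIG path)

const apps = kc.makeApiClient(k8s.AppsV1Api);

async function listDeployments(): Promise<void> {
  // Equivalent to GET /apis/apps/v1/deployments, but sent to the real API
  // endpoint with authentication instead of going through the local proxy.
  const res = await apps.listDeploymentForAllNamespaces();
  // Note: older client versions wrap the list as res.body.items instead.
  for (const d of res.items) {
    console.log(`${d.metadata?.namespace}/${d.metadata?.name}`);
  }
}

listDeployments().catch((err) => console.error(err));
```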

Related

Setting up https secure connection for inter container communication via ecs fargate

I haven't found an answer to this after looking through a lot of Stack Overflow posts and other documentation, and I'm curious how to go about this.
The setup is simple. I need two clusters in Fargate to communicate over a secure HTTPS connection (not HTTP). I've got the cert I need for this, a public-facing ALB, and the service deployed. All of that is fine. I've been told to create multivalue service discovery for this and set the Route 53 namespace for it. Done. Still, I can only get the clusters to communicate over HTTP, so only over port 80.
I was then told to create an internal ALB, attach the same cert to it that I did with the public-facing one, mimic the path of the public-facing ALB, and then somehow that will magically work. I'm confused by this and need clarification. This should be very simple: I need cluster A to communicate with cluster B over a secure HTTPS connection. How do I do this using Fargate?
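To make the internal-ALB suggestion concrete, here is a hypothetical sketch of the calling side: service A sends ordinary HTTPS to a DNS name that resolves to the internal ALB, which terminates TLS on port 443 with the attached certificate and forwards to service B's targets. The hostname internal-serviceb.example.internal is an assumption; the certificate on the ALB must cover whatever name the client actually uses.

```typescript
// Hypothetical sketch: service A calling service B through an internal ALB
// that terminates TLS on port 443 with the attached ACM certificate.
// 'internal-serviceb.example.internal' is an assumed Route 53 record aliased
// to the internal ALB; if the certificate does not cover this name, the TLS
// handshake will fail.
import https from 'node:https';

https.get('https://internal-serviceb.example.internal/health', (res) => {
  console.log('service B answered over HTTPS:', res.statusCode);
  res.resume(); // drain the response so the socket is released
}).on('error', (err) => console.error('TLS/connection error:', err));
```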

Internal communication among services with app mesh in ECS

I have an application stack consisting of three services in AWS ECS, and I have been planning to implement a service mesh using AWS App Mesh. I followed the instructions below to set up mTLS for my services.
https://awscloudfeed.com/whats-new/security/how-to-use-acm-private-ca-for-enabling-mtls-in-aws-app-mesh
Using the technique described in the blog I was able to set up mTLS, and communication works fine from the virtual gateway to the services.
But when one of the services tries to access another service, it fails to make a connection. The services are built with NodeJS, and one service (let's say A) uses the request library to call service B. From my understanding of the service mesh, the TLS session initiation should start from the Envoy proxy of service A and terminate in the Envoy proxy of service B. In that case I should use the service discovery URL of service B (e.g. http://serviceb.example.com) when calling it from service A. When I do so, I get an ECONNRESET error with the message "socket hang up". And when using the https protocol (e.g. https://serviceb.example.com) I get an ECONNRESET error with a TLS error message.
But if I disable the client certificate requirement for service B, I am able to access it from service A with the https protocol. Does this mean that to set up mTLS in App Mesh I need to load the client certificate in the application itself? I would expect the request to go through without issue, since the client certificate is provided through the backend client configuration.
Can you help me understand how App Mesh mTLS works and whether I am missing something in my App Mesh configuration?
Thank You
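For reference, a minimal sketch of what the mesh model implies for the application code: with Envoy originating the mTLS session, service A speaks plain HTTP to the virtual service name and lets the sidecar present the client certificate (serviceb.example.com is the name from the question; the /status path is hypothetical).

```typescript
// Minimal sketch of the App Mesh model: the app sends plain HTTP to the
// virtual service name. Service A's Envoy sidecar intercepts the call and
// originates the mTLS session using the client certificate from the backend
// client policy; service B's Envoy terminates it. The application itself
// never loads certificates. '/status' is a hypothetical path.
import http from 'node:http';

http.get('http://serviceb.example.com/status', (res) => {
  console.log('service B via the mesh:', res.statusCode);
  res.resume(); // drain the response so the socket is released
}).on('error', (err) => console.error('mesh call failed:', err));
```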

Cloud Service like Reverse Proxy?

Can anyone tell me what kind of service fits the use case below:
I want to expose a public IP that receives HTTPS/HTTP requests and forwards the traffic to services I have on-prem.
Looking at Azure, AWS, etc., is there a service that solves my problem?
Regards...
If you are using Azure and you want HTTPS-based requests to be sent to your backend APIs (which can be on-prem or on any cloud), you can look at Azure API Management (APIM).
You can use APIM with or without a VNET.
APIM can be used in External mode if you want to integrate a VNET to perform data plane operations; this exposes a public IP as well as a gateway URL that can be used to send HTTPS traffic.
Reference:
https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet?tabs=stv2
https://learn.microsoft.com/en-us/azure/api-management/api-management-key-concepts#scenarios
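As a minimal sketch of the client side, assuming a hypothetical gateway URL and an API that requires a subscription key (Ocp-Apim-Subscription-Key is APIM's standard key header; the URL, path, and environment variable are assumptions):

```typescript
// Minimal sketch: calling a hypothetical APIM gateway endpoint over HTTPS.
// APIM receives the request on its public gateway URL and forwards it to
// the on-prem backend configured for the API.
async function callApim(): Promise<void> {
  const res = await fetch('https://my-apim.azure-api.net/orders', {
    // APIM's standard subscription-key header; the key itself is assumed
    // to be provided via an environment variable.
    headers: { 'Ocp-Apim-Subscription-Key': process.env.APIM_KEY ?? '' },
  });
  console.log('gateway responded:', res.status);
}

callApim().catch((err) => console.error(err));
```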
Additionally, you can check out Application Gateway.
Reference:
https://learn.microsoft.com/en-us/azure/architecture/example-scenario/gateway/firewall-application-gateway

How can a beginner use AWS services to host a public server and create endpoints for a web application

I have a front-end development background, but this is my first time researching how to use AWS services to host a public server for our web application. Currently, I have trouble understanding how EC2 and API Gateway work with each other, and how a public server hosts a web application in this case. I have read a number of tutorials, but I have trouble understanding where the API endpoint is generated. I saw that API Gateway can generate an endpoint, but in that case, do I still use EC2 to host the web application? And how do the URLs from these two services connect to each other? I think I got confused about the web app structure, especially on the server side. Could someone briefly explain these two services and maybe point me to some useful tutorials I could reference? As a beginner, everything is so confusing to me. Thank you so much!!
The simple approach is to deploy your web/app server on an EC2 instance and check which port your service is running on, e.g. 8080. Then go to the attached security group of that EC2 instance and open port 8080. You can also attach an Elastic IP so that your IP never changes, even after restarting the EC2 instance, and then access your application publicly at http://<elastic-ip>:8080/<path>.
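A minimal sketch of the app-server side, assuming a NodeJS service: it has to listen on 0.0.0.0 (not just localhost) so the security-group rule for port 8080 can actually reach it.

```typescript
// Minimal sketch: an HTTP server on the EC2 instance. Binding to 0.0.0.0
// makes it reachable from outside the instance once the security group
// allows inbound traffic on port 8080.
import http from 'node:http';

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello from EC2\n');
});

server.listen(8080, '0.0.0.0', () => {
  console.log('listening on http://0.0.0.0:8080');
});
```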
BTW, the best approach is to use an ELB with ECS/EKS, then use API Gateway, deploy your static content to S3, and serve it through CloudFront.

Pre-deploy development communication with an Internal Kubernetes service

I'm investigating a move to Kubernetes (coming from AWS ECS), but I haven't solved the local development issue of depending on internal services.
Let me elaborate:
When developing and testing microservices, before they are deployed as a Kubernetes Service, I want to be able to talk to other, internal Kubernetes Services. As there are more than 20 microservices, I have a Kubernetes cluster running the latest development versions; I can't run Minikube.
Example:
I'm developing a user-service which needs access to the email service. The email service is already on Kubernetes and is an internal service.
So before the user-service is deployed, I want to be able to talk to the internal email service for dev/testing. I can't make use of Kubernetes' nice service-discovery env vars.
As we already have a VPN up to restrict the DEV environment to testers/developers only, could I use this VPN to provide access to the Kubernetes Service IP addresses? The Kubernetes DEV environment is in the same VPC as the VPN.
If you deploy your internal services as type NodePort, you can access them over your VPN via that nodePort. NodePorts can be dynamically allocated, or you can customize them to be 'static' so they are known to you up front.
When developing an app on your local machine, you can access the dependent service via that NodePort, as sketched below.
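A minimal sketch of the locally running user-service calling the email service over the VPN, assuming a node reachable at 10.0.1.25 and a static nodePort of 30025 (both hypothetical placeholders):

```typescript
// Minimal sketch: the user-service under local development calls the email
// service through a NodePort reachable over the VPN. The node IP (10.0.1.25)
// and static nodePort (30025) are hypothetical placeholders for your cluster.
import http from 'node:http';

http.get('http://10.0.1.25:30025/health', (res) => {
  console.log('email service responded:', res.statusCode);
  res.resume(); // drain the response so the socket is released
}).on('error', (err) => console.error('email service unreachable:', err));
```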
As an alternative, you can use port-forwarding from kubectl (https://kubernetes.io/docs/user-guide/connecting-to-applications-port-forward/) to forward a pod to your local machine. (Note: this only handles traffic to a pod, not a service.)
Telepresence (http://telepresence.io) is designed for this scenario, though it presumes developers have kubectl access to the staging/dev cluster.