We have multiple k8s clusters on AWS (using EKS), each in its own VPC, and the VPCs are peered so that every cluster can communicate with one central cluster. The central cluster runs an Elasticsearch service that collects logs from all clusters; we do not use the AWS Elasticsearch service but run our own inside Kubernetes.
Each cluster runs an ingress controller with its own internal AWS load balancer.
I run fluentd pods on every node of each cluster (through a DaemonSet), and they need to be able to reach Elasticsearch in the main cluster. Within the main cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they have no way to reach a Service that exists only inside the central cluster.
What is the best way to achieve this?
This is all new ground for me, so I want to make sure I'm not missing something obvious.
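What I'm picturing, in case it helps frame answers: expose Elasticsearch in the central cluster through an internal load balancer and point the other clusters' fluentd at that DNS name over the peering. A rough sketch of the fluentd side, assuming the common fluent/fluentd-kubernetes-daemonset image; the hostname is a placeholder for the internal load balancer's DNS name:

    # Sketch: fluentd DaemonSet pointing at Elasticsearch exposed through an
    # internal load balancer in the central cluster's (peered) VPC.
    # Volume mounts, RBAC, and resource limits are omitted for brevity.
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluentd
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: fluentd
      template:
        metadata:
          labels:
            app: fluentd
        spec:
          containers:
            - name: fluentd
              image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
              env:
                - name: FLUENT_ELASTICSEARCH_HOST
                  # placeholder: DNS name of the central cluster's internal ELB
                  value: internal-es-logging-123456789.eu-west-1.elb.amazonaws.com
                - name: FLUENT_ELASTICSEARCH_PORT
                  value: "9200"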
Related
I have a few services in an AWS ECS cluster, using the Fargate launch type. Those services are all in the same VPC and discover each other using Service Connect. The whole thing sits behind an Application Load Balancer (ALB) that handles HTTP(S) traffic.
Now I'd like to access one of those services (a database) from an AWS Batch job. The Batch job is currently launched into the same VPC as well. Unfortunately (but not surprisingly), the Batch job can't find the database service, because Batch jobs do not seem to use Service Connect.
How can I make this database service discoverable and accessible to the AWS Batch job?
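One direction I'm considering, though I haven't confirmed it works: register the database with plain Cloud Map service discovery (which Service Connect is built on) so it gets a private DNS name that anything in the VPC, Batch jobs included, can resolve. A hypothetical CloudFormation sketch; all names and ids are placeholders:

    # Hypothetical sketch: register the database in a Cloud Map private DNS
    # namespace so any task in the VPC (Batch jobs included) can resolve it.
    Resources:
      PrivateNamespace:
        Type: AWS::ServiceDiscovery::PrivateDnsNamespace
        Properties:
          Name: internal.example           # placeholder namespace
          Vpc: vpc-0123456789abcdef0       # placeholder: the shared VPC
      DatabaseDiscovery:
        Type: AWS::ServiceDiscovery::Service
        Properties:
          Name: database                   # resolves as database.internal.example
          NamespaceId: !Ref PrivateNamespace
          DnsConfig:
            DnsRecords:
              - Type: A
                TTL: 60
    # The ECS service would then register its tasks by referencing
    # !GetAtt DatabaseDiscovery.Arn in its ServiceRegistries configuration.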
I am running an AWS EKS cluster with multiple applications in it. The cluster is in a private subnet, so to reach those applications I use a VPN and create internal ALBs to open each application's dashboard in the browser. That works, but now I am trying to use a single ALB to access all of these applications.
I want to configure a single ALB and use it to call any application running in my EKS cluster.
Suppose I have an application uiserver running in my EKS cluster. I want an ALB where a call to alburl/uiserver/somequery is rewritten to alburl/somequery and routed to my uiserver service.
I have not found anything on how to configure this. If anyone has any idea how to set up this type of ALB, please reply; I have put a rough sketch of what I am imagining below.
Thanks
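Roughly the Ingress shape I have in mind with the AWS Load Balancer Controller (service names and ports are placeholders). As far as I can tell, the ALB can route on the path prefix but will not strip /uiserver from the path, so the application would have to accept the prefix, or an in-cluster proxy would have to do the rewrite:

    # Sketch: one internal ALB routing by path prefix to multiple services via
    # the AWS Load Balancer Controller. Service names and ports are placeholders.
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: single-alb
      annotations:
        alb.ingress.kubernetes.io/scheme: internal
        alb.ingress.kubernetes.io/target-type: ip
    spec:
      ingressClassName: alb
      rules:
        - http:
            paths:
              - path: /uiserver
                pathType: Prefix
                backend:
                  service:
                    name: uiserver
                    port:
                      number: 80
              - path: /other-app
                pathType: Prefix
                backend:
                  service:
                    name: other-app
                    port:
                      number: 80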
I have two Kubernetes clusters running in AWS EKS. How can I connect them so that they can communicate and share data?
One cluster runs only stateless applications, while the other runs stateful ones such as Redis, RabbitMQ, etc.
What would be the easiest way to set up communication?
If you have a dedicated cluster for DBs and other private stateful workloads, then ensure that the worker nodes for that EKS cluster are private.
The next step is to create a Service resource that exposes your Redis DB with an internal endpoint. You can achieve this by adding the following annotation to the Service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
With the above, your stateful workloads are exposed only through internal endpoints. Once this is done, you have two options to connect your VPCs:
VPC peering to allow one cluster to connect with the other (a rough sketch of this option follows below).
A Transit Gateway that both VPCs use to communicate privately.
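As a sketch of the peering option in CloudFormation, you need the peering connection plus a route in each VPC's route table; every id and CIDR below is a placeholder:

    # Hypothetical sketch: peer the two cluster VPCs and route each VPC's CIDR
    # across the peering connection. All ids and CIDRs are placeholders.
    Resources:
      ClusterPeering:
        Type: AWS::EC2::VPCPeeringConnection
        Properties:
          VpcId: vpc-0aaaaaaaaaaaaaaaa      # stateless cluster's VPC
          PeerVpcId: vpc-0bbbbbbbbbbbbbbbb  # stateful cluster's VPC
      RouteToStateful:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: rtb-0aaaaaaaaaaaaaaaa  # stateless VPC route table
          DestinationCidrBlock: 10.1.0.0/16    # stateful VPC CIDR
          VpcPeeringConnectionId: !Ref ClusterPeering
      RouteToStateless:
        Type: AWS::EC2::Route
        Properties:
          RouteTableId: rtb-0bbbbbbbbbbbbbbbb  # stateful VPC route table
          DestinationCidrBlock: 10.0.0.0/16    # stateless VPC CIDR
          VpcPeeringConnectionId: !Ref ClusterPeering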
I will be following the approach suggested by #marcincuber to use an internal load balancer.
However, I also found another workaround: exposing the Redis and RabbitMQ Services with type LoadBalancer.
Since both my clusters are in the same VPC, there is no need for VPC peering or any gateway setup; I am thinking of restricting the traffic with the standard Kubernetes Service field loadBalancerSourceRanges.
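A minimal sketch of what I have in mind, with a placeholder CIDR for the other cluster's node range:

    # Sketch: expose Redis with an internal LoadBalancer Service and only accept
    # traffic from the other cluster's node range. The CIDR is a placeholder.
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      loadBalancerSourceRanges:
        - 10.0.0.0/16        # placeholder: the stateless cluster's node subnet
      selector:
        app: redis
      ports:
        - port: 6379
          targetPort: 6379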
I would like to write a dataframe into Elasticsearch from within Databricks.
My Elasticsearch cluster is hosted on AWS, and Databricks spins up EC2 instances with a certain role. That role has permission to interact with my Elasticsearch cluster, but for some reason I can't even ping the Elasticsearch cluster.
Do I need to find a way to squeeze both my Databricks workers and my Elasticsearch cluster into the same VPC? Sounds like a CloudFormation nightmare.
If you've got ES running in another VPC, then you'll need either PrivateLink or peering to ensure the workers can access it. For isolation, and to avoid hitting IP limits for your workers, it would be better to keep ES and Databricks in different VPCs.
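If you go the PrivateLink route, a hypothetical CloudFormation sketch, assuming ES sits behind an internal NLB in its own VPC; every id and ARN below is a placeholder:

    # Hypothetical sketch: publish ES (behind an internal NLB in its own VPC) as
    # a PrivateLink endpoint service, then consume it from the Databricks VPC.
    Resources:
      EsEndpointService:
        Type: AWS::EC2::VPCEndpointService
        Properties:
          AcceptanceRequired: false
          NetworkLoadBalancerArns:
            # placeholder: the internal NLB fronting Elasticsearch
            - arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/es-internal/0123456789abcdef
      EsEndpoint:
        Type: AWS::EC2::VPCEndpoint
        Properties:
          VpcEndpointType: Interface
          VpcId: vpc-0ccccccccccccccccc          # placeholder: Databricks VPC
          ServiceName: !Sub "com.amazonaws.vpce.${AWS::Region}.${EsEndpointService}"
          SubnetIds:
            - subnet-0ddddddddddddddddd          # placeholder: worker subnet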
I have two Kubernetes clusters, one on Google Cloud Platform using GKE and one on Amazon Web Services built using Kops.
How do I implement multi-cluster communication between a pod in the AWS Kops cluster and a pod in GKE?
My Kops cluster uses flannel in vxlan mode for networking, so there are no pod routes in the route table on the AWS side, and a VPC-level VPN tunnel therefore does not help. I need a VPN tunnel between the Kubernetes clusters themselves, with the pod routes managed accordingly.
Please advise how I can achieve this.