AWS EKS - On-prem worker node

I am new to AWS EKS. I have an application for which I need one worker node (a pod) of the Kubernetes cluster to run on my on-premises infrastructure. Is that possible, and if so, how can I achieve it?

In theory you can run EKS and an on-prem Kubernetes cluster at the same time, managing them via a single federation control plane. I've never tried this with EKS, but since EKS is mostly vanilla Kubernetes, it should work.
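If you go the federation route, a rough sketch with the KubeFed project (just one option among several; the cluster and context names below are placeholders, not something EKS gives you) looks like this:

    # Hypothetical sketch: register an EKS cluster and an on-prem cluster with a
    # KubeFed host cluster so both can be driven from one control plane.
    # The context names (eks-cluster, onprem-cluster, host-cluster) are placeholders.
    kubefedctl join eks-cluster \
      --cluster-context eks-cluster \
      --host-cluster-context host-cluster
    kubefedctl join onprem-cluster \
      --cluster-context onprem-cluster \
      --host-cluster-context host-cluster

    # Check that both clusters registered with the federation control plane
    kubectl get kubefedclusters -n kube-federation-system

Note that this keeps the on-prem workloads in their own cluster and targets them through federated resources; it does not attach an on-prem machine as a worker node of the EKS control plane.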

Related

Unable to understand AWS Fargate Pricing

We have a requirement to deploy Redis as an external LoadBalancer Service on AWS EKS (Elastic Kubernetes Service). As Redis will be a StatefulSet, which of the following combinations will be the best fit with EKS?
EKS with Self-managed nodes
EKS with Managed Node Groups
EKS with AWS Fargate
However, I have read that AWS Fargate should be used for deploying stateless applications.
Fargate, thus far, has been ideal for running stateless containerized workloads in a secure and cost-effective manner. Secure because Fargate runs each pod in a VM-isolated environment and patches nodes automatically. Cost-effective because, in Fargate, you only pay for the compute resources you have configured for your pod.
I don't understand how stateless applications end up being more cost-effective. Kindly verify the statement below; it would be quite helpful.
In stateless applications, the container is in the running state only when there are requests; otherwise the number of instances is 0, just as in GCP Cloud Run.
Whereas in stateful ones, the container is running all the time, and for this reason we should use EC2 instances for stateful applications.
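Regarding the quoted pricing statement: on Fargate a pod is billed roughly for the vCPU and memory derived from its resource requests, regardless of whether the workload is stateless or stateful. A minimal sketch of what that configuration looks like (the name, namespace, and values below are made up, and the namespace would have to match a Fargate profile):

    # redis-pod.yaml (hypothetical): the requests below are what Fargate sizes
    # and bills the pod for; apply with: kubectl apply -f redis-pod.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: redis-example
      namespace: fargate-apps
    spec:
      containers:
      - name: redis
        image: redis:6
        resources:
          requests:
            cpu: "500m"
            memory: 1Gi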

How can I add an external node to an EKS cluster?

I am considering using AWS EKS to deploy a Kubernetes cluster, but I wonder whether it supports bringing other nodes into the cluster. Those other nodes may come from GCP, on-prem infrastructure, etc.

How do I achieve multi-cluster pod-to-pod communication in Kubernetes?

I have two kubernetes clusters, one in Google Cloud Platform over GKE and one in Amazon Web Services built using Kops.
How do I implement multi-cluster communication between a pod in the AWS kops cluster and a pod in GKE?
My Kops cluster uses flannel-vxlan mode of networking, hence there are no routes in the route table on the AWS side. So creating a VPC level VPN tunnel is not helpful. I need to achieve a VPN tunnel between the Kubernetes clusters and manage the routes.
Please advise me as to how I can achieve this.

AWS Kubernetes inter-cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but they are VPC-peered properly to communicate with one central cluster, which runs an Elasticsearch service that collects logs from all clusters. We do not use the AWS Elasticsearch service but rather our own instance inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I'm running fluentd pods on each node of every cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from other clusters, since they need to be able to access the service inside the central cluster.
What is the best way to achieve that?
This has all been breaking new ground for me, so I wanted to make sure I'm not missing something obvious.
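One common pattern for this (a sketch under assumptions, not a definitive setup: the service name, namespace, labels, and port below are hypothetical) is to expose Elasticsearch on the central cluster through an internal AWS load balancer, which is reachable from the peered VPCs, and then point each cluster's fluentd output at that load balancer's DNS name instead of the in-cluster service name:

    # es-internal.yaml (hypothetical): internal load balancer in front of
    # Elasticsearch so fluentd in peered VPCs can reach it.
    # The annotation is the one used by the legacy in-tree AWS provider;
    # newer AWS Load Balancer Controller releases use different annotations.
    # Apply with: kubectl apply -f es-internal.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch-internal
      namespace: logging
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    spec:
      type: LoadBalancer
      selector:
        app: elasticsearch   # assumes the Elasticsearch pods carry this label
      ports:
      - port: 9200
        targetPort: 9200

The security groups on that load balancer would also need to allow traffic from the peered VPC CIDRs.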

Does AWS support Kubernetes?

I've read that AWS does not support Kubernetes and instead builds its own Docker orchestration engine, EC2 Container Service. However, on the Kubernetes getting-started page there is a guide on how to run Kubernetes on AWS:
https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/aws.md
Which is right?
You can install Kubernetes on a normal Amazon EC2 server.
The new container service is a separate offering by Amazon, called ECS.
EDIT: In 2018, AWS released a new managed container service for Kubernetes called EKS: https://aws.amazon.com/eks/
Amazon Elastic Container Service for Kubernetes (Amazon EKS) makes it easy to deploy, manage, and scale containerized applications using Kubernetes on AWS. Amazon EKS runs the Kubernetes management infrastructure for you across multiple AWS availability zones to eliminate a single point of failure.
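As a rough, hedged example of what getting started with EKS looks like, using the separate eksctl CLI (not mentioned in the quote above; the cluster name, region, and sizes are placeholders):

    # Hypothetical sketch: create a small EKS cluster with eksctl.
    eksctl create cluster \
      --name demo-cluster \
      --region us-east-1 \
      --nodes 2 \
      --node-type t3.medium

    # eksctl updates kubeconfig, so the cluster should be reachable right away
    kubectl get nodes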
Kubernetes provides scripts to easily set up a cluster on a set of EC2 machines. The setup does pretty much everything needed to get started quickly.
Here is the link: https://github.com/GoogleCloudPlatform/kubernetes/blob/release-1.0/docs/getting-started-guides/aws.md
Yes, it's possible to set up Kubernetes on AWS. See: http://kubernetes.io/v1.0/docs/getting-started-guides/aws.html
You can also manually set up Kubernetes on AWS by launching an EC2 instance.
For setting it up on a Red Hat AMI: https://access.redhat.com/articles/1353773
(Note: Kubernetes needs a flannel network to be set up for managing networking between Docker containers running on different hosts (minions).)
Amazon's Container Service is unrelated to Kubernetes.
There are 3 main options for installing Kubernetes on AWS:
CoreOS has a CLI for installing and managing Kubernetes on AWS: https://coreos.com/kubernetes/docs/latest/kubernetes-on-aws.html
Kubernetes has some scripts for setting up a cluster on AWS: http://kubernetes.io/docs/getting-started-guides/aws/
Manual installation on EC2. Lots of options here: http://kubernetes.io/docs/getting-started-guides/#cloud
As an aside, minikube is now a thing, which is nice for running locally to try stuff out:
http://kubernetes.io/docs/getting-started-guides/minikube/
AWS recently launched EKS, which provides managed k8s master nodes. This should be what you are looking for.
Yes. You can use kubeadm to install Kubernetes on EC2 instances.
There are other tools available as well:
kops
EKS
kubeadm (see the sketch below)
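For the kubeadm option, a minimal sketch of what that looks like on EC2 (assuming kubeadm, kubelet, kubectl, and a container runtime are already installed on the instances; the pod CIDR below matches flannel's default):

    # On the instance chosen as the control plane:
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl usable for the current user
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # Install a pod network such as flannel (manifest URL varies by version,
    # check the flannel documentation for the current one)
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    # On each worker instance, run the join command that kubeadm init printed:
    # sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    #   --discovery-token-ca-cert-hash sha256:<hash>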