I am new to Kubernetes and am transitioning some of our apps to a Kubernetes cluster.
The containers make heavy use of S3, which I have been accessing through IAM roles in AWS.
I have configured a two-node cluster with kubeadm on EC2 instances (not EKS).
But I am stuck: whenever I run the container in a pod, I get this error:
**Could not connect to the endpoint URL: "https://<bucket_name>.s3.amazonaws.com/"**
I have IAM roles attached to the EC2 instances that are configured as the master and worker nodes.
Please suggest the best way to establish an S3 connection from pods.
Any documentation or Git repo link would be highly appreciated. Thanks in advance.
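For reference, here is a minimal check that can be run from inside one of the pods (for example, by kubectl exec into a Python container with boto3 installed) to see whether the EC2 instance-profile credentials are reaching the pod at all; the bucket name is a placeholder:

```python
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

# Placeholder bucket name; replace with the real bucket.
BUCKET = "my-bucket"

session = boto3.Session()

# If this prints False, the pod is not picking up the EC2 instance-profile
# credentials (for example, the instance metadata service is unreachable
# from the pod network).
print("credentials found:", session.get_credentials() is not None)

try:
    s3 = session.client("s3")
    resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)
    print("S3 reachable, key count:", resp.get("KeyCount"))
except EndpointConnectionError as exc:
    # Matches the "Could not connect to the endpoint URL" error above and
    # usually points at DNS or egress problems from the pod network.
    print("endpoint connection error:", exc)
except ClientError as exc:
    print("S3 responded but the request failed:", exc)
```

If the credentials come back empty even though the EC2 instance role is attached, a common culprit on kubeadm-on-EC2 setups is pods not being able to reach the instance metadata service (for example, an IMDSv2 hop limit of 1).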
I have created an EKS cluster with managed node groups.
Recently, I deployed Redis as an external LoadBalancer service.
I am trying to set up an authenticated connection to it from NodeJS and Python microservices, but I am getting a connection timeout error.
However, I am able to exec into the deployed Redis container and execute Redis commands.
Also, I was able to do the same when I deployed Redis on GKE.
Have I missed some network configuration needed to allow traffic from external sources?
The subnets the EKS nodes are using are all public.
Also, while creating the Amazon EKS node role, I attached three policies to it, as suggested in the docs:
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
It was also mentioned that:
We recommend assigning the policy to the role associated to the Kubernetes service account instead of assigning it to this role.
Will attaching this policy to the Kubernetes service account solve my problem?
Also, here is the guide I used for deploying Redis:
https://ot-container-kit.github.io/redis-operator/guide/setup.html#redis-standalone
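For reference, the Python microservice side is just a standard redis-py client pointed at the external LoadBalancer hostname; the hostname, port, and password below are placeholders:

```python
import redis

# Placeholders: the external LoadBalancer hostname created for the Redis
# Service, the Redis port, and the auth password.
REDIS_HOST = "xxxxxxxx.elb.amazonaws.com"
REDIS_PORT = 6379
REDIS_PASSWORD = "change-me"

client = redis.Redis(
    host=REDIS_HOST,
    port=REDIS_PORT,
    password=REDIS_PASSWORD,
    socket_connect_timeout=5,  # fail fast instead of hanging on a blocked port
)

try:
    print("PING ->", client.ping())
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError) as exc:
    # A timeout here, while redis-cli works from inside the cluster, usually
    # points at the load balancer or node security group not allowing the port.
    print("could not reach Redis through the load balancer:", exc)
```

If redis-cli works from inside the cluster but this times out from outside, the load balancer listener and the node security group rules for the Redis port are the usual places to look.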
I have a cluster running on AWS EC2 (not managed EKS). I'm trying to add a load balancer to the cluster without restarting it or initializing a new node; is that possible? I've already set the permissions and tags described in this post: https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
But the catch is that the flag --cloud-provider=aws must be added to the kubelet before the node joins the cluster.
Are there any other options or another way to do it?
You can try the AWS Load Balancer Controller; it works with both managed and self-managed Kubernetes clusters: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
I want Statping to be independent of the infrastructure it is monitoring, but I also want to check the uptime of services that are only exposed as ClusterIP inside the EKS cluster. Will setting up a kubeconfig on the EC2 instance help?
There are multiple ways to access Kubernetes Services from the Statping EC2 instance.
All of them are discussed in https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/ and summarized in https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#so-many-proxies
kubectl proxy (https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#using-kubectl-proxy) is a good option for your use case if you already have a kubeconfig on the Statping EC2 instance.
You can follow https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls to construct the proxy URLs.
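For example, a minimal sketch in Python, assuming kubectl proxy is already running on the Statping instance and there is a Service called my-service exposing port 80 in the default namespace (both names are placeholders):

```python
import requests

# Assumes `kubectl proxy` is already running on the Statping EC2 instance
# (its default listen address is 127.0.0.1:8001).
PROXY = "http://127.0.0.1:8001"

# Apiserver proxy URL format:
#   /api/v1/namespaces/<namespace>/services/<service-name>:<port>/proxy/<path>
# "my-service", port 80 and the "default" namespace are placeholders here.
url = f"{PROXY}/api/v1/namespaces/default/services/my-service:80/proxy/"

resp = requests.get(url, timeout=5)
print(resp.status_code, resp.text[:200])
```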
I am new to Kubernetes and have a requirement to set up an etcd cluster behind an ELB. Our Kubernetes cluster will be hosted using Rancher. Can anyone please share the steps or a link for the same?
There are two options:
1: If you will import the cluster into Rancher, then #suren is right. You basically need to create a Kubernetes cluster that conforms to all of your needs and then import it into Rancher.
2: If you plan to launch clusters using Rancher, then you need to create a NodeTemplate that uses AWS EC2 for the etcd nodes. Then you can launch the cluster by ticking the master, etcd, and worker roles separately and referencing your etcd NodeTemplate in the etcd node group.
We have multiple Kubernetes clusters on AWS (using EKS), each in its own VPC, but they are VPC-peered so they can communicate with one central cluster that runs an Elasticsearch service collecting logs from all clusters. We do not use the AWS Elasticsearch service; we run our own inside Kubernetes.
We do use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I'm running Fluentd pods on each node of every cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need to reach a service that lives inside the central cluster.
What is the best way to achieve that?
This is all new ground for me, so I wanted to make sure I'm not missing something obvious.
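For reference, once Elasticsearch in the central cluster is exposed on an endpoint reachable across the peering (for example, through an internal load balancer), a quick sanity check from a pod in one of the other clusters could look like this; the hostname is a placeholder:

```python
import requests

# Hypothetical DNS name of an internal load balancer fronting the
# Elasticsearch service in the central cluster; replace with your endpoint.
ES_ENDPOINT = "http://internal-es.example.internal:9200"

try:
    # _cluster/health is a standard Elasticsearch API; a timeout here usually
    # means the peering routes or security groups do not allow the traffic.
    resp = requests.get(f"{ES_ENDPOINT}/_cluster/health", timeout=5)
    print(resp.status_code, resp.json())
except requests.exceptions.RequestException as exc:
    print("Elasticsearch not reachable across the peering:", exc)
```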