Unable to connect to Redis External Load Balancer Service

I have created an EKS cluster with the Managed Node Groups.
Recently, I have deployed Redis as an external Load Balancer service.
I am trying to set up an authenticated connection to it from NodeJS and Python microservices, but I am getting a connection timeout error.
However, I am able to exec into the deployed Redis container and execute redis commands there.
Also, I was able to do the same when I deployed Redis on GKE.
Have I missed some network configurations to allow traffic from external resources?
The subnets which the EKS node is using are all public.
Also, while creating the Amazon EKS node role, I have attached 3 policies to this role as suggested in the doc -
AmazonEKSWorkerNodePolicy
AmazonEC2ContainerRegistryReadOnly
AmazonEKS_CNI_Policy
It was also mentioned that -
We recommend assigning the policy to the role associated to the Kubernetes service account instead of assigning it to this role.
Will attaching the policy to the Kubernetes service account solve my problem?
Also, here is the guide that I used for deploying Redis -
https://ot-container-kit.github.io/redis-operator/guide/setup.html#redis-standalone
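For reference, the kind of client connection I am attempting looks roughly like the sketch below; the load balancer hostname and the REDIS_PASSWORD environment variable are placeholders, not the real values from my deployment:

```python
# Rough sketch of the Python client connection being attempted; the load
# balancer hostname and REDIS_PASSWORD below are placeholders.
import os
import redis

client = redis.Redis(
    host="redis-standalone-xxxx.elb.amazonaws.com",  # external LB DNS name (placeholder)
    port=6379,
    password=os.environ.get("REDIS_PASSWORD"),       # assumed to match the operator's auth secret
    socket_connect_timeout=5,                        # fail fast instead of hanging
)

try:
    print(client.ping())  # True if the LB and node security groups allow the traffic
except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError) as exc:
    # A timeout here points at networking (security groups / NACLs on the
    # node subnets) rather than at Redis authentication.
    print(f"connection failed: {exc}")
```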

Related

Is it possible to add an ELB (cloud provider) to an existing Kubernetes cluster running on RHEL8 EC2?

I have a cluster running on AWS EC2 (not managed EKS), and I'm trying to add a load balancer to the cluster without restarting it or initializing a new node. Is that possible? I've already set the permissions and tags described in this post: https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd
But the catch is that the --cloud-provider=aws flag must be added to the kubelet before the node joins the cluster.
Are there any other options or another way to do it?
(kubectl get nodes output)
You can try using the AWS Load Balancer Controller; it works with both managed and self-managed K8s clusters: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/
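If the load balancer never gets provisioned, one thing worth checking is the subnet tagging the blog post mentions. A hedged sketch of tagging a public subnet with boto3 (the subnet ID, cluster name, and region are placeholders; the tag keys are the ones the cloud provider and controller documentation describe):

```python
# Hedged sketch: tag a public subnet so the AWS cloud provider / Load Balancer
# Controller can discover it. Subnet ID, cluster name, and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["subnet-0123456789abcdef0"],  # placeholder subnet ID
    Tags=[
        # ownership tag expected by the cloud provider / controller
        {"Key": "kubernetes.io/cluster/my-cluster", "Value": "shared"},
        # marks the subnet as usable for internet-facing load balancers
        {"Key": "kubernetes.io/role/elb", "Value": "1"},
    ],
)
```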

Access S3 from a pod in Kubernetes Cluster

I am new to Kubernetes and I am transitioning some of the apps to K8S Cluster.
The containers make heavy use of S3, which I previously accessed through IAM roles in AWS.
I have configured a 2-node cluster using kubeadm on EC2 instances (not EKS).
But I am stuck, as whenever I run the container through pods I get the error:
**Could not connect to the endpoint URL: "https://<bucket_name>.s3.amazonaws.com/"**
I have IAM roles attached to the Ec2 instances that are configured as master and nodes.
Please suggest the best way to establish an S3 connection from pods.
Any document/gitrepo link will be highly appreciated. Thanks in advance.
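For context, the access pattern inside the containers is essentially the minimal boto3 sketch below; the bucket, key, and region are placeholders, and credentials are assumed to come from the node's IAM instance profile via the default credential chain:

```python
# Minimal sketch of reading an object from S3 inside a pod, relying on the
# default credential chain (here, the EC2 instance profile attached to the
# node). Bucket, key, and region are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

response = s3.get_object(Bucket="my-bucket", Key="config/app.json")
print(response["Body"].read().decode("utf-8"))

# If this fails with "Could not connect to the endpoint URL", it is usually
# DNS resolution or outbound network access from the pod network, not IAM.
```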

Deploying Grafana as a FARGATE service in an AWS ECS cluster

I'm having some issues with a Grafana deployment. After deploying Grafana, I can't change the default password for the admin account, which you have to do the first time you launch Grafana. I log in with the default credentials, then get prompted to enter a new password. When I do, I get an "unauthorized" error. Looking at the browser's console, it seems to give a 404 error when I try to submit the new password.
I'm using an RDS instance to store Grafana user data. The RDS instance is in the same subnet as the ECS cluster. I've attached the AmazonRDSDataFullAccess policy to the ECS task role but that did not help. I also tried making the RDS instance publicly available but that was also not helpful.
I'm using Grafana version 6.5.0. I was using the latest 7.1 but downgraded hoping it would solve my current issue.
Firstly, make sure your RDS database has a security group allowing inbound access from the ECS cluster. This grants the inbound access to the RDS database that is required.
As Fargate is serverless, a node could be destroyed and any local configuration would be gone. Since you're using RDS, you should use environment variables to specify the DB connection details.
Finally, add these to your task definition using the environment item; see the sketch below. For secrets, such as the password for the RDS DB, use the secrets option.
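A hedged sketch of what that could look like when registering the task definition with boto3; the GF_DATABASE_* variables map to Grafana's database settings, and every ARN, endpoint, and name below is a placeholder:

```python
# Hedged sketch: register a Grafana task definition with the DB connection
# passed as environment variables and the password pulled from a secret.
# All ARNs, endpoints, and names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="grafana",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[
        {
            "name": "grafana",
            "image": "grafana/grafana:6.5.0",
            "portMappings": [{"containerPort": 3000, "protocol": "tcp"}],
            "environment": [
                {"name": "GF_DATABASE_TYPE", "value": "mysql"},
                {"name": "GF_DATABASE_HOST", "value": "my-rds.abcdef.us-east-1.rds.amazonaws.com:3306"},
                {"name": "GF_DATABASE_NAME", "value": "grafana"},
                {"name": "GF_DATABASE_USER", "value": "grafana"},
            ],
            "secrets": [
                {
                    "name": "GF_DATABASE_PASSWORD",
                    "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:grafana-db",  # placeholder
                },
            ],
        }
    ],
)
```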

Fargate Task with NAT Gateway fails to connect to RDS database

Basically, I'm following these two guides:
Deploying Hasura on AWS with Fargate, RDS and Terraform
Deploying Containers on Amazon’s ECS using Fargate and Terraform: Part 2
I have:
Postgres RDS Database deployed in 'Multi-AZ'
My python/flask app deployed in Fargate across multiple AZ's
I run a migration inside the task definition before the app
ALB Load balancing between the tasks
Logging for RDS, ECS and ALB into Cloudwatch Logs.
A NAT gateway with an Elastic IP for each private subnet to get internet connectivity
A new route table for the private subnets
NO certificates
I use terraform 0.12 for the deploy.
The repository is on ECR
But...
My app can't connect to the RDS database:
sqlalchemy.exc.OperationalError
(psycopg2.OperationalError): FATAL: password authentication failed for user "postgres"
These are the logs on pastebin-logs
I've already tried changing the password to a very simple one before deploying, directly in the console, opening ports, making access public, changing the private subnet to a public one, etcetera, etcetera...
Please help, I've been stuck on this error for a week!
UPDATE
I inject the database credentials in this way:
pastebin-terraform
I cannot comment, but I mean this as a comment.
What does the security group egress look like on the ECS service that runs the task? You need to make sure it can talk to the RDS instance, usually on port 5432.
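Also, since the error says "password authentication failed" rather than timing out, the network path is probably fine and it is worth isolating the credentials. A rough psycopg2 check you could run with the same values Terraform injects (the endpoint, user, and password here are placeholders):

```python
# Rough connectivity/credentials check against the RDS instance. The host,
# dbname, user, and password are placeholders for the values Terraform injects.
import psycopg2

try:
    conn = psycopg2.connect(
        host="my-db.abcdef.us-east-1.rds.amazonaws.com",  # placeholder RDS endpoint
        port=5432,
        dbname="postgres",
        user="postgres",
        password="the-password-terraform-injects",        # placeholder
        connect_timeout=5,
    )
    print("connected OK")
    conn.close()
except psycopg2.OperationalError as exc:
    # A timeout means security groups / routing; an authentication error means
    # the password reaching the task differs from the one set on the RDS instance.
    print("failed:", exc)
```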

AWS Kubernetes inter-cluster communication for shipping fluentd logs

We have multiple k8s clusters on AWS (using EKS), each in its own VPC, but they are properly VPC-peered so they can communicate with one central cluster that runs an Elasticsearch service which will collect logs from all clusters. We do not use the AWS Elasticsearch service, but rather our own running inside Kubernetes.
We use an ingress controller on each cluster, and each has its own internal AWS load balancer.
I have fluentd pods on each node of every cluster (through a DaemonSet), but they need to be able to communicate with Elasticsearch on the main cluster. Within the same cluster I can ship logs to Elasticsearch fine, but not from the other clusters, since they need a way to reach the service inside the central cluster.
What is the best way to achieve that?
This is all new ground for me, so I wanted to make sure I'm not missing something obvious somewhere.
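To make the question concrete, what I need to work from the other clusters is essentially the request in the sketch below, assuming the central cluster exposes Elasticsearch through an internal load balancer whose DNS name resolves across the peering and whose security group allows port 9200 from the peer VPC CIDRs (the hostname is a placeholder):

```python
# Rough connectivity check from a remote cluster toward the central cluster's
# Elasticsearch, reached through an internal load balancer over the VPC
# peering. The endpoint below is a placeholder.
import requests

ES_ENDPOINT = "http://internal-es-central.example.internal:9200"  # placeholder internal LB DNS

resp = requests.get(f"{ES_ENDPOINT}/_cluster/health", timeout=5)
print(resp.status_code, resp.json())
```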