I have set up an EKS cluster and I am trying to connect an application pod to an ElastiCache endpoint. I put both in the same VPC and configured inbound/outbound security groups for them. Unfortunately, while trying to telnet from the pod to the cache endpoint, it says "xxx.yyy.zzz.amazonaws.com: Unknown host". Is it even possible to make such a connection?
Yes, if the security groups allow connectivity, then you can connect from EKS pods to ElastiCache. However, be aware that the DNS name may not resolve for some time (up to around 15 minutes) after you launch the ElastiCache instance/cluster.
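For reference, a minimal sketch of such a rule with the AWS CLI, assuming a Redis endpoint on port 6379 (use 11211 for memcached); both security group IDs below are hypothetical placeholders:

# Allow the EKS worker nodes' security group to reach the cache's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-cache \
  --protocol tcp \
  --port 6379 \
  --source-group sg-eks-nodes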
I found an answer in an issue from cortexproject (a monitoring tool based on the Grafana stack).
I solved it by using "addresses" instead of "host", set to the address of my memcached instance. It worked.
PS: the "addresses" option isn't documented in the official documentation.
It should look like this:
memcached_client:
  addresses: memcached.host
I thought this was going to be easy, but unfortunately I was wrong. I just created an AWS-hosted Grafana workspace, and I'd like to query an AWS RDS instance for some data.
I am struggling to find out how to add the hosted Grafana instance to a security group so it is allowed to access the RDS instance.
I did check the Docs!
Has anyone done this before that could help me out?
Thanks!
I ran into a similar problem. The AWS team told me that if your database is sitting in a non-default VPC and is publicly accessible, then you have to whitelist the IP addresses for your managed Grafana's region in your security group.
Here is the list of IP addresses by region:
• us-east-1: 35.170.12.166, 54.88.16.229, 3.234.162.252, 54.160.119.132, 54.196.72.13, 3.213.190.135, 54.83.225.191, 3.234.173.51, 107.22.41.194
• eu-central-1: 18.185.12.232, 3.69.106.181, 52.29.127.210
• us-west-2: 44.230.70.68, 34.208.176.166, 35.82.14.62
• us-east-2: 18.116.131.87, 18.117.203.54
• eu-west-1: 52.30.158.152, 54.247.159.227, 54.170.69.237, 52.210.87.10, 54.73.6.128, 54.78.34.200, 54.216.218.40, 176.34.91.249, 34.246.52.247
You can refer to the documentation provided by AWS on how to connect to the database:
AMG Postgresql Connection
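For example, a sketch of whitelisting one region's addresses with the AWS CLI; the security group ID is a placeholder, the loop uses eu-central-1's addresses from the list above, and port 5432 assumes PostgreSQL:

# Add each managed-Grafana address for your region to the RDS security group
for ip in 18.185.12.232 3.69.106.181 52.29.127.210; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-rds \
    --protocol tcp \
    --port 5432 \
    --cidr "$ip/32"
done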
I had to do the same thing, and in the end the only way I could find out the IP address was to look through the VPC flow logs to see what was hitting the IP address of the RDS instance.
AWS has many IP addresses it can use for this, and unfortunately there is no way to assign a specific IP address or security group to Grafana.
So you need to set up a few things to get it to work, and there is no guarantee that the IP address for your AWS hosted Grafana won't change on you.
If you don't have it already, set up a VPC for your AWS infrastructure. Steps 1-3 in this article cover what you need to set up.
Set up Flow Logs for your VPC. These will capture the traffic in and out of the network interfaces and you can filter on the IP address of your RDS instance and the Postgres port. This article explains how to set it up.
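For reference, a sketch of enabling flow logs with the AWS CLI; the VPC ID, log group name, and IAM role ARN are placeholders:

# Publish VPC flow logs to CloudWatch Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role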
Once you capture the IP address you can add it to the security group for the RDS instance.
One thing I have found is that I get regular timeouts when querying RDS Postgres from AWS-hosted Grafana. It works fine, then it doesn't, then it works again. I've not found a way to increase the timeout or otherwise solve the issue yet.
I have what I think is a reasonably straightforward setup in Google Cloud: a GKE cluster, a Cloud SQL instance, and a "Click-To-Deploy" Kafka VM instance.
All of the resources are in the same VPC, with firewall rules to allow all traffic to the internal VPC CIDR blocks.
The pods in the GKE cluster have no problem accessing the Cloud SQL instance via its private IP address. But they can't seem to access the Kafka instance via its private IP address:
# kafkacat -L -b 10.1.100.2
% ERROR: Failed to acquire metadata: Local: Broker transport failure
I've launched another VM manually into the VPC, and it has no problem connecting to the Kafka instance:
# kafkacat -L -b 10.1.100.2
Metadata for all topics (from broker -1: 10.1.100.2:9092/bootstrap):
1 brokers:
broker 0 at ....us-east1-b.c.....internal:9092
1 topics:
topic "notifications" with 1 partitions:
partition 0, leader 0, replicas: 0, isrs: 0
I can't seem to see any real difference in the networking between the containers in GKE and the manually launched VM, especially since both can access the Cloud SQL instance at 10.10.0.3.
Where do I go looking for what's blocking the connection?
I have seen that the error is related to the network. However, if you are using GKE on the same VPC network, you should ensure the Internal Load Balancer is configured properly. Also, note that this product or feature is in beta, which means it is not yet guaranteed to work as expected. Another suggestion is to ensure that you are not using any policy that might block the connection. I found the following article in the community that may help you solve it.
This gave me what I needed: https://serverfault.com/a/924317
The networking rules in GCP still seem wonky to me, coming from a long time working with AWS. I had rules that allowed anything in the VPC CIDR blocks to contact anything else in those same CIDR blocks, but that wasn't enough. Explicitly adding the worker nodes' subnet as a source for a new rule opened it up.
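For anyone hitting the same thing, a sketch of such a rule with gcloud; the network name and source range are placeholders, and the source range should cover the GKE nodes (plus the pod range if pod traffic is not NATed to the node IPs):

# Allow the GKE worker-node subnet to reach Kafka on its broker port
gcloud compute firewall-rules create allow-gke-to-kafka \
  --network my-vpc \
  --allow tcp:9092 \
  --source-ranges 10.1.96.0/22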
I am getting the following error while creating a gateway for the sample bookinfo application:
Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: Address is not allowed
I have created an EKS PoC cluster in my dev AWS account using two node groups (each with two instances), one with t2.medium and the other with t2.large instance types, across two /26 subnets, with the default VPC-CNI provided by EKS.
But as the cluster grew, with multiple services running, I started facing an issue of IPs not being available (as per the docs, the default VPC-CNI driver treats pods like EC2 instances for IP allocation).
To avoid that, I followed this post to change the networking from the default VPC-CNI to Weave:
https://medium.com/codeops/installing-weave-cni-on-aws-eks-51c2e6b7abc8
That resolved the IP-unavailability issue.
Now, after the network reconfiguration from VPC-CNI to Weave, I have started getting the issue above (as per the subject line) for my service mesh, which is configured using Istio.
There are a couple of services running inside the mesh, and I have also integrated Kiali, Prometheus, and Jaeger with it.
I tried to have a look at GitHub (https://github.com/istio/istio/issues/9998) and the docs (https://istio.io/docs/ops/setup/validation/), but could not find a proper answer.
Let me know if anyone has faced this issue and has a partial/full solution.
This 'appears' to be related to the switch from the AWS CNI to Weave. The AWS CNI uses the IP range of your VPC, while Weave uses its own address range for pods, so there may be leftover iptables rules from the AWS CNI, for example.
Internal error occurred: failed calling admission webhook "pilot.validation.istio.io": Post https://istio-galley.istio-system.svc:443/admitpilot?timeout=30s: Address is not allowed
The message above implies that whatever address istio-galley.istio-system.svc resolves to inside your K8s cluster is not a valid IP address, so I would also check what that name resolves to. (It may be related to CoreDNS.)
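To check that from inside the cluster, a quick sketch using a throwaway busybox pod (the pod name is arbitrary):

# Resolve the webhook service name from a pod's point of view
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup istio-galley.istio-system.svc.cluster.local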
You can also try the following steps (a shell sketch of them follows this list);
Basically (quoted):
kubectl delete ds aws-node -n kube-system
delete /etc/cni/net.d/10-aws.conflist on each of the nodes
edit the instance security groups to allow TCP on 6783 and UDP on 6783-6784 (the ports Weave Net uses)
flush the iptables nat, mangle, and filter tables
restart the kube-proxy pods
apply the weave-net daemonset
delete existing pods so they get recreated in the Weave pod CIDR's address space.
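A rough shell sketch of those steps; the per-node commands assume an SSH session on each worker node, and the security-group change is done separately in the EC2 console or CLI:

# 1. Remove the AWS CNI daemonset
kubectl delete ds aws-node -n kube-system

# 2. On each node: remove the AWS CNI config and flush iptables
sudo rm /etc/cni/net.d/10-aws.conflist
sudo iptables -F
sudo iptables -t nat -F
sudo iptables -t mangle -F

# 3. Restart kube-proxy by deleting its pods (the daemonset recreates them)
kubectl -n kube-system delete pod -l k8s-app=kube-proxy

# 4. Apply the Weave Net daemonset
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# 5. Delete existing pods (per namespace) so they come back in Weave's address space
kubectl -n default delete pod --all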
Furthermore, you can try reinstalling everything from the beginning using weave.
Hope it helps!
I am deploying a Laravel installation on AWS. Everything runs perfectly when I allow it to receive all inbound traffic (EC2 > Network & Security > Security Groups > Edit inbound rules). If I turn off inbound traffic or limit it to one IP, the webpage doesn't load and it gives me this error:
PDO Exception SQLSTATE[HY000] [2002] Connection timed out
However, for security reasons I don't want it set up like this; I don't want anyone being able to even try to reach my web app. Everything is hosted in AWS, I don't have any external entities, and it's running on RDS and EC2. I added an Elastic IP address and whitelisted it, but that didn't work either. I followed every step in this tutorial: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-laravel-tutorial.html#php-laravel-tutorial-generate
Environment variables are working, as well as dependencies... pretty much everything, unless I restrict inbound traffic as I mentioned.
How do I whitelist AWS's own instance then, to make this work with better security?
Thank you!
I think part of this answer is what you may be looking for.
You should enable inbound access from the EC2 security group associated with your EC2 instance, instead of from the instance's IP address.
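For example, a sketch of a group-to-group rule with the AWS CLI; both group IDs are placeholders, and port 3306 assumes MySQL on RDS:

# Allow the EC2 instance's security group to reach the RDS security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-rds \
  --protocol tcp \
  --port 3306 \
  --source-group sg-laravel-web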
More than just adding an Elastic IP address to your AWS instance, you need to do two more things:
Assign the Elastic IP to your AWS instance (allocating it is not the same as assigning it; you must explicitly associate it with the instance).
Whitelist the internal IP that it generates once you link it to your app.
I'm a newbie with some of the AWS services. I was following this documentation link:
http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.ConnectToCacheNode.Redis.html
I have already installed redis-cli with brew on my computer (I'm on a Mac), and I'm still getting the same error when trying to connect to the node:
$ redis-cli -h mynode.abcdef.0001.usw2.cache.amazonaws.com -p 6379
Error:
Could not connect to Redis at mynode.abcdef.0001.usw2.cache.amazonaws.com:6379: Operation timed out
Yes, I have configured the VPC security group to allow all inbound traffic to my node, and the problem persists.
Security Group Conf: (screenshot)
Node Description: (screenshot)
Any ideas?
You can't connect to ElastiCache from outside of AWS; that's just the way it is set up. It would be nice for debugging and development, but for production it doesn't really make sense to introduce that much latency into a system whose main purpose is to give as-fast-as-possible results.
From AWS FAQ:
Please note that IP-range based access control is currently not
enabled for Cache Clusters. All clients to a Cache Cluster must be
within the EC2 network, and authorized via security groups as
described above.
http://aws.amazon.com/elasticache/faqs/
External access to ElastiCache resources is possible yet discouraged:
Elasticache is a service designed to be used internally to your VPC.
External access is discouraged due to the latency of Internet traffic
and security concerns. However, if external access to Elasticache is
required for test or development purposes, it can be done through a
VPN.
Guide: Accessing ElastiCache Resources from Outside AWS
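If a full VPN is overkill for a quick test, a common sketch is an SSH tunnel through any EC2 instance in the same VPC that is allowed to reach the cluster (the bastion hostname below is a placeholder):

# Forward local port 6379 to the ElastiCache endpoint via an EC2 host in the VPC
ssh -L 6379:mynode.abcdef.0001.usw2.cache.amazonaws.com:6379 ec2-user@bastion.example.com

# Then, in another terminal, connect through the tunnel
redis-cli -h 127.0.0.1 -p 6379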