I thought this was going to be easy, but unfortunately I was wrong. I just created an AWS-hosted Grafana workspace, and I'd like to query an AWS RDS instance for some data.
I am struggling to figure out how to add the hosted Grafana instance to a security group so it would be allowed to access the RDS instance.
I did check the Docs!
Has anyone done this before that could help me out?
Thanks!
Ran into a similar problem. The AWS team told me that if your database is sitting in a non-default VPC and is publicly accessible, then you have to whitelist the IP addresses of your Amazon Managed Grafana region in your security group.
Here is the list of IP addresses by region:
• us-east-1: 35.170.12.166, 54.88.16.229, 3.234.162.252, 54.160.119.132, 54.196.72.13, 3.213.190.135, 54.83.225.191, 3.234.173.51, 107.22.41.194
• eu-central-1: 18.185.12.232, 3.69.106.181, 52.29.127.210
• us-west-2: 44.230.70.68, 34.208.176.166, 35.82.14.62
• us-east-2: 18.116.131.87, 18.117.203.54
• eu-west-1: 52.30.158.152, 54.247.159.227, 54.170.69.237, 52.210.87.10, 54.73.6.128, 54.78.34.200, 54.216.218.40, 176.34.91.249, 34.246.52.247
You can refer to the documentation provided by AWS on how to connect to the database at:
AMG Postgresql Connection
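For example, here is a minimal boto3 sketch of adding the us-east-1 addresses above to the RDS security group, assuming PostgreSQL on port 5432 (the security group ID is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Amazon Managed Grafana egress IPs for us-east-1, from the list above
grafana_ips = [
    "35.170.12.166", "54.88.16.229", "3.234.162.252", "54.160.119.132",
    "54.196.72.13", "3.213.190.135", "54.83.225.191", "3.234.173.51",
    "107.22.41.194",
]

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,  # PostgreSQL port
        "ToPort": 5432,
        "IpRanges": [
            {"CidrIp": ip + "/32", "Description": "Amazon Managed Grafana"}
            for ip in grafana_ips
        ],
    }],
)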
I had to do the same thing, and in the end the only way I could find out the IP address was to look through the VPC flow logs to see what was hitting the IP address of the RDS instance.
AWS has many IP addresses it can use for this, and unfortunately there is no way to assign a specific IP address or security group to Grafana.
So you need to set up a few things to get it to work, and there is no guarantee that the IP address for your AWS hosted Grafana won't change on you.
If you don't have it already, set up a VPC for your AWS infrastructure. Steps 1-3 in this article cover what you need.
Set up Flow Logs for your VPC. These will capture the traffic in and out of the network interfaces and you can filter on the IP address of your RDS instance and the Postgres port. This article explains how to set it up.
Once you capture the IP address you can add it to the security group for the RDS instance.
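If your flow logs go to CloudWatch Logs, you can do the filtering with a Logs Insights query; here's a sketch using boto3, where the log group name and the RDS private IP are placeholders:

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Placeholders: your flow-log group and the RDS instance's private IP
query_id = logs.start_query(
    logGroupName="/vpc/flow-logs",
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString=(
        "fields @timestamp, srcAddr, dstAddr, dstPort, action"
        " | filter dstAddr = '10.0.1.25' and dstPort = 5432"
        " | stats count(*) by srcAddr"
    ),
)["queryId"]

time.sleep(10)  # Logs Insights queries run asynchronously
for row in logs.get_query_results(queryId=query_id)["results"]:
    print(row)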
One thing I have found is that I get regular timeouts when querying RDS Postgres from AWS-hosted Grafana. It works fine, then it doesn't, then it works again. I've not found a way to increase the timeout or solve the issue yet.
Related
I have an EC2 instance running inside an ECS cluster with a VPC.
On the instance, I need to run an ECS task that needs access to DynamoDB.
When I try running the same task using Fargate, I can use the assignPublicIp = 'ENABLED' option to allow my task to have access to other AWS services, and everything works fine.
However, the assignPublicIp option is not available for the EC2 launch type, and I cannot figure out how to allow my EC2 instance to have access to other AWS services.
I read the AWS docs and followed guides like this one to set up a VPC endpoint for DynamoDB.
I also verified that there aren't any network access restrictions by making the inbound/outbound rules for my NACL and the security group for the VPC wide open (at least for the sake of testing).
Here is what the rules look like, for both the NACL and my security group:
Finally, I used the VPC > Reachability Analyzer to check if AWS can detect any problems regarding the connection path between my EC2 instance and DynamoDB, but the analysis reported a Reachable status.
It basically told me that there were no issues establishing a connection along the following path:
Network interface for my EC2 instance (source)
Security group for the VPC
NACL for the VPC
Route table for the VPC
which includes the following route added by the VPC endpoint for DynamoDB
Destination: pl-02cd2c6b (com.amazonaws.us-east-1.dynamodb, 3.218.182.0/24, 3.218.180.0/23, 52.94.0.0/22, 52.119.224.0/20)
Target: the endpoint ID (e.g., vpce-foobar)
VPC endpoint for DynamoDB (destination)
Despite AWS telling me that I have a "Reachable" status, I still think it might be a network reachability problem, because when I run the task, the script I am running gets stuck right after it makes a GetItem call to DynamoDB.
If it was a permission error or an invalid parameter issue, I would get an error immediately, but everything just "hangs" there, until the task eventually times out.
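As a diagnostic, you can at least turn the hang into a fast failure by putting aggressive botocore timeouts on the client; a sketch (the table and key names are hypothetical):

import boto3
from botocore.config import Config

# Short timeouts and a single attempt turn the silent hang into a
# quick ConnectTimeoutError/ReadTimeoutError that shows up in the task logs.
cfg = Config(connect_timeout=5, read_timeout=5, retries={"max_attempts": 1})
ddb = boto3.client("dynamodb", region_name="us-east-1", config=cfg)

resp = ddb.get_item(
    TableName="my-table",           # hypothetical table name
    Key={"pk": {"S": "test-key"}},  # hypothetical key schema
)
print(resp.get("Item"))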
Any pointers on what I might be missing here, or other workarounds would be very appreciated.
Thank you.
EDIT 1 (2021/02/13):
So I went back to the AWS docs to see if I had missed anything in setting up the VPC endpoints. I originally had one set up for DynamoDB, but since I also need to use S3 in my service, I went ahead and set up a Gateway VPC Endpoint for S3 too (I also wanted to see if the issue I am having is a generic network problem or specific to DynamoDB).
Then, I made some changes to my script to try to make a call to S3 (to get the bucket's location, for simplicity) as the very first thing to do. I knew that the call would end up timing out, so I wanted to trigger the error immediately upon starting my script execution.
I waited until my task would eventually fail because of the timeout, and this time I noticed something interesting.
Here are the error logs I got when the task failed:
The IP address that my task was trying to reach was 52.85.146.194:443.
And here are the IP addresses that I found in the managed prefix list for S3, which I found in the VPC console:
The IP address I got the timeout error from is not on the list. Could this be a hint to the cause of the issue? Or am I missing something and there is actually nothing wrong with that?
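One way to check that programmatically (a sketch; the prefix list ID is a placeholder, look yours up with describe_managed_prefix_lists or in the VPC console):

import ipaddress
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder ID for the com.amazonaws.us-east-1.s3 managed prefix list
entries = ec2.get_managed_prefix_list_entries(
    PrefixListId="pl-0123456789abcdef0"
)["Entries"]

# Does the address that timed out fall inside any CIDR on the list?
ip = ipaddress.ip_address("52.85.146.194")
print(any(ip in ipaddress.ip_network(e["Cidr"]) for e in entries))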
I have set up an EKS cluster and I am trying to connect an application pod to an ElastiCache endpoint. I put both in the same VPC and configured inbound/outbound security group rules for them. Unfortunately, while trying to telnet from the pod to the cache endpoint, it says "xxx.yyy.zzz.amazonaws.com: Unknown host". Is it even possible to make such a connection?
Yes, if the security groups allow connectivity then you can connect from EKS pods to Elasticache. However, be aware that the DNS name may not resolve for some time (up to around 15 minutes) after you launch the Elasticache instance/cluster.
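If you want to wait out the propagation instead of retrying by hand, a small resolution loop does the trick; a sketch, with a placeholder hostname:

import socket
import time

host = "xxx.yyy.zzz.amazonaws.com"  # placeholder ElastiCache endpoint

# Retry until the endpoint name resolves; DNS can take up to ~15 minutes
for attempt in range(30):
    try:
        print(socket.gethostbyname(host))
        break
    except socket.gaierror:
        time.sleep(60)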
I found an answer in an issue from the cortexproject (a monitoring tool based on the Grafana stack).
Solved it using "addresses" instead of "host", with the address of my memcached. It worked.
PS: the "addresses" option isn't documented in the official documentation.
It has to look like this:
memcached_client:
  addresses: memcached.host
I know this issue has already been discussed before, yet I feel my question is a bit different.
I'm trying to figure out how to enable access to Kibana for the managed AWS Elasticsearch domain I have in my AWS account.
It could be that what I am about to say here is inaccurate or complete nonsense.
I am pretty much a novice with AWS VPCs and the ELK stack.
Architecture:
Here is the "Architecture":
I have a VPC.
Within the VPC I have several subnets.
Each server sends its data to Elasticsearch using Logstash, which runs on the server itself. For simplicity let's assume I have a single server.
The Elasticsearch HTTPS URL shown in the Amazon console resolves to an internal IP within the subnet that I have defined.
Resources:
I have found the following link, which suggests using one of two options:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
Solutions:
Option 1: resource based policy
Allow a resource-based policy for Elasticsearch by introducing a condition that specifies certain IP addresses.
This was discussed in the following thread, but unfortunately it did not work for me.
Proper access policy for Amazon Elastic Search Cluster
When I try to implement it in the Amazon console, Amazon notifies me that because I'm using a security group, I should resolve access through the security group instead.
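For reference, this is the kind of IP-condition policy the linked thread suggests, sketched here via boto3; the domain name, account ID, and source IP are placeholders:

import json
import boto3

# Placeholder policy: allows es:* on the domain only from one source IP
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "es:*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*",
        "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.42/32"]}},
    }],
}

es = boto3.client("es", region_name="us-east-1")
es.update_elasticsearch_domain_config(
    DomainName="my-domain",
    AccessPolicies=json.dumps(policy),
)

Note that IP conditions like this only take effect for domains with public endpoints; for a VPC-only domain they won't help, which is consistent with the guess below about why the security group rule had no effect.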
Security group rules:
I tried to set a rule which allows my personal computer's (router's) public IP to access the Amazon Elasticsearch ports, and even tried opening all ports to my public IP.
But that didn't work out.
I would be happy to get a more detailed explanation of why, but I'm guessing it's because the Elasticsearch domain has only an internal IP and no public IP; since it is encapsulated within the VPC, I am unable to access it from outside even if I define a rule allowing a public IP to access it.
Option 2: Using proxy
I'm disinclined to use this solution unless I have no other choice.
I'm guessing that if I set up another server with a public and an internal IP within the same subnet and VPC as the Elasticsearch domain, and use it as a proxy, I would then be able to access this server from the outside by defining the same rules in its newly created security group, like the article suggested.
Sources:
I found an out-of-the-box solution that someone has already made for this issue, using a proxy server:
It can be run as either an executable or a Docker container.
https://github.com/abutaha/aws-es-proxy
Option 3: Other
Can you suggest another solution? Is it possible to use an Amazon Load Balancer or Amazon API Gateway to accomplish this task?
I just need proof of concept not something which goes into production environment.
Bottom line:
I need to be able to access Kibana from a browser in order to search the Elasticsearch indexes.
Thanks a lot
The best way is with the just released Cognito authentication.
https://aws.amazon.com/about-aws/whats-new/2018/04/amazon-elasticsearch-service-simplifies-user-authentication-and-access-for-kibana-with-amazon-cognito/
This is a great way to authenticate a single user, but it is not a good way for the system you're building to access Elasticsearch.
I am deploying a Laravel installation in AWS. Everything runs perfectly when I allow it to receive all inbound traffic (EC2 > Network & Security > Security Groups > Edit inbound rules). If I turn off inbound traffic and limit it to an IP, the webpage doesn't load and it gives me this error:
PDO Exception SQLSTATE[HY000] [2002] Connection timed out
However, for security reasons I don't want it set up like this; I don't want anyone being able to even try to reach my web app. Everything is being hosted in AWS; I don't have any external entities. It's running on RDS and EC2. I added an Elastic IP address and whitelisted it, but that didn't work either. I followed every step in this tutorial: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-laravel-tutorial.html#php-laravel-tutorial-generate
Environment variables are working, as are dependencies; pretty much everything works unless I restrict inbound traffic as I mentioned.
How do I whitelist AWS's own instance, then, to make this work with better security?
Thank you!
I think part of this answer is what you may be looking for.
You should enable inbound access from the EC2 security group associated with your EC2 instance, instead of the EC2 IP address.
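A minimal boto3 sketch of that rule, assuming MySQL on port 3306; both group IDs are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbb2222cc",  # placeholder: the RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,  # MySQL; use 5432 for PostgreSQL
        "ToPort": 3306,
        "UserIdGroupPairs": [{
            "GroupId": "sg-0ddd3333eee4444ff",  # placeholder: the EC2 security group
            "Description": "App servers",
        }],
    }],
)

Traffic is then matched by source security group rather than by IP, so it keeps working even if the instance's address changes.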
More than just adding an Elastic IP address to your AWS instance, you need to do two more things:
Associate the Elastic IP with your AWS instance (yes, this is not the same as just allocating it; you must explicitly associate it with the instance).
Whitelist the internal IP that it generates once you link it to your app.
?????
Profit
I have an RDS instance with a URL that was provided by Amazon. (This URL resolves to an IP address, of course.)
To make connecting to the DB easier, I set up a redirect from my domain, like this: "db.myDomain.com" points to the IP of the DB instance.
For a week it all worked fine, but then it suddenly stopped working. After searching for a few hours, I realized that the IP I was redirecting to was no longer the IP of the instance.
This made me think that maybe the IPs on RDS are dynamic and the only way to access the DB is with the URL provided by Amazon.
Is this correct? If so, is there a way to redirect from one URL to another?
Yes, your observation about the dynamic nature of the IPs for RDS is correct; it is the anticipated behaviour of the service. Always use the URL provided for the RDS instance to access it.
For most use cases you don't need a redirect at all, as the DNS name would go inside a config file / connection string. If you still want a friendly name, you can use Route 53 to create a CNAME record that points at the RDS endpoint. Here is a documentation link from AWS on how to accomplish that [ https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-to-rds-db.html ] - it is easier & more convenient.
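A sketch of creating such a record with boto3; the hosted zone ID and RDS endpoint below are placeholders:

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDER",  # placeholder: hosted zone for myDomain.com
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "db.myDomain.com",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [
                # placeholder RDS endpoint -- point at the name, never the IP
                {"Value": "mydb.abc123xyz.us-east-1.rds.amazonaws.com"}
            ],
        },
    }]},
)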
For an RDS instance, the DNS name does not change, but the IP address can change in some cases, especially when you enable Multi-AZ (Multi-Availability-Zone): the RDS instance will fail over to another Availability Zone, with a different IP address, when AWS detects a failure.
So in your application you can't pin the IP address for database access; always use the DNS name (domain name) to reach your database.