Accessing RDS from within a Docker container not getting through security group? - amazon-web-services

I'm attempting to run a webserver that uses an RDS database, on EC2, inside a Docker container.
I've set up the security groups so the EC2 host's role is allowed to access RDS, and if I access it from the host machine directly everything works correctly.
However, when I run a simple container on the host and attempt to access RDS, it gets blocked as if the security group weren't letting it through. After a bunch of trial and error, it seems that the container's requests don't appear to come from the EC2 host, so the firewall says no.
I was able to work around this in the short run by setting --net=host on the Docker container, but this breaks a lot of great Docker networking functionality like being able to map ports (i.e., now I need to make sure each instance of the container listens on a different port by hand).
Has anyone found a way around this? It seems like a pretty big limitation to running containers in AWS if you're actually using any AWS resources.

Yes, containers do hit the public IPs of RDS. But you do not need to tune low-level Docker options to allow your containers to talk to RDS. The ECS cluster and the RDS instance have to be in the same VPC and then access can be configured through security groups. The easiest way to do this is to:
Navigate to the RDS instances page
Select the DB instance and drill in to see details
Click on the security group id
Navigate over to the Inbound tab and choose Edit
Ensure there is a rule of type MySQL/Aurora with source Custom
When entering the custom source, just start typing in the name of the ECS cluster and the security group name will be auto-completed for you
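If you prefer the CLI, the equivalent rule can be added with something like the following sketch (the security group IDs and the MySQL port 3306 are placeholders for your own values):
# Allow the ECS instances' security group to reach the RDS security group on 3306
aws ec2 authorize-security-group-ingress \
    --group-id <rds-security-group-id> \
    --protocol tcp --port 3306 \
    --source-group <ecs-security-group-id>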
This tutorial has screenshots that illustrate where to go.
Full disclosure: This tutorial features containers from Bitnami and I work for Bitnami. However the thoughts expressed here are my own and not the opinion of Bitnami.

Figured out what was happening, posting here in case it helps anyone else.
Requests from within the container were hitting the public IP of the RDS instance rather than the private one (which is what the security groups are keyed on). It turned out the DNS inside the Docker container was using Google's 8.8.8.8 resolver, which doesn't do the AWS magic of resolving the RDS endpoint to the private IP.
So for instance:
DOCKER_OPTS="--dns 10.0.0.2 -H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock -g /mnt/docker"
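You can see which answer you are getting by resolving the endpoint both ways (the endpoint below is a placeholder):
# On the EC2 host: should return the private IP
nslookup mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
# Inside a container pointed at 8.8.8.8: returns the public IP instead
docker run --rm --dns 8.8.8.8 busybox nslookup mydb.xxxxxxxx.us-east-1.rds.amazonaws.com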

The inbound rule for the RDS security group should be set to the private IP of the EC2 instance rather than its public IPv4 address.

As @adamneilson mentions, setting the Docker options is your best bet. Here is how to discover your Amazon DNS server on the VPC. Also, the section Enabling Docker Debug Output in the Amazon EC2 Container Service Developer Guide's Troubleshooting chapter mentions where the Docker options file is.
Assuming you are running a VPC block of 10.0.0.0/24, the DNS server would be 10.0.0.2 (Amazon's resolver sits at the VPC base address plus two).
For CentOS, Red Hat and Amazon:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/sysconfig/docker
For Ubuntu and Debian:
sed -i -r 's/(^OPTIONS=\")/\1--dns 10.0.0.2 /g' /etc/default/docker
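After editing the options file, restart the Docker daemon and check the resolver a container actually sees (a quick sanity check; busybox is just an arbitrary small image):
sudo service docker restart
docker run --rm busybox cat /etc/resolv.conf    # should now list nameserver 10.0.0.2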

When I tried to connect to AWS RDS from inside a Docker container, I got an "Access denied for user 'username'@'xxx.xx.xxx.x' (using password: YES)" error.
To solve this issue, I did the following two things:
I created a new user and granted it privileges.
mysql> CREATE USER 'newuser'@'%' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON *.* TO 'newuser'@'%';
mysql> FLUSH PRIVILEGES;
I added Google's public DNS address 8.8.8.8 to the Docker container at run time, so that the container could resolve the IP address of the AWS RDS endpoint from its domain name.
$ docker run --name backend-app --dns=8.8.8.8 -p 8000:8000 -d backend-app
Then I connected from inside the Docker container to AWS RDS successfully.
Note: I first tried the second step on its own, but that did not solve the connection problem. Only after applying both steps did the connection succeed.
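To verify, you can run a quick connection test from inside the running container (a sketch; the endpoint is a placeholder and this assumes the mysql client is installed in the image):
$ docker exec -it backend-app mysql -h mydb.xxxxxxxx.us-east-1.rds.amazonaws.com -u newuser -p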

Related

Redis cli unable to connect to AWS ElastiCache due to redis memory usage but application still able to communicate

Is it possible that redis-cli is given lower priority to connect when memory consumption is high, while the application is still allowed to communicate?
I am unable to connect via the CLI, so I can't check anything. I also don't have access to the Redis server itself.
We connect without authentication -
redis-cli -h <hostname>
I ran a process which inserted too many Redis keys, and that caused this situation. Now I am not able to delete those keys. I am afraid the other, necessary keys will get evicted as old, and the system will start doing processing for things no longer available in Redis.
I am not able to connect via telnet either.
Is it possible to connect via a Python script at this point?
If I restart the Java application, will it be able to connect anymore?
Would Redis server access via the AWS console be able to delete any of the key patterns? I don't have that access currently, so I'm not able to confirm this myself; I've never used it either.
Update
[Graphs taken from the AWS console over the last day since the issue happened; images not included.]
Update
I went through the ElastiCache FAQ but did not find any mention of being able to manage data at the key-value pair level, or of some specially privileged user (like root in MySQL) that can connect when no other users are able to.
All I found were cluster-level management capabilities.
From the question it is not clear whether the redis-cli -h <host> command you're running is from within EC2 or from your local machine (outside the AWS VPC).
Accessing from EC2
You will have to ensure following points:
Both the EC2 instance and Redis Instance are on the same VPC.
The security group should be allowing port 6379 (it presumably already is if an application on the same EC2 instance is able to access Redis).
Accessing from outside Amazon VPC
This is not something that's preconfigured and I will suggest that you go through the Accessing Your Cluster docs under the heading "How to Access ElastiCache Resources from Outside AWS".
First, check the connectivity from the source (generally an EC2 instance) to the target (the Redis host).
You can use a simple command for that, like:
#curl -v hostIP(or dnsName):Port
#curl -v myredis.com:6379 or curl -v 192.17.37.42:6379
If you see "Connected" in the output then there is no issue with the network; otherwise you have to look into network configuration such as firewalls.
Next, you can connect to Redis using redis-cli with the below command:
#redis-cli -h myredis.com -p 6379
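Once connected, a PING is a quick end-to-end check (standard redis-cli usage; the hostname is the same placeholder as above):
#redis-cli -h myredis.com -p 6379 ping
PONG
If curl connects but PING hangs or errors, the network is fine and the problem is on the Redis server side.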

Query regarding SQL Workbench access to Redshift cluster endpoint IP Whitelisting

I use a dongle/Wi-Fi for internet access, and hence my public IP is dynamic (it changes every day).
To access Redshift through SQL Workbench, the end user's IP has to be added to the default security group under the Network & Security tab of the EC2 dashboard.
Every day I have to follow this process religiously before starting work in SQL Workbench.
Can anyone guide me to a permanent fix on this?
Permanent fix? No.
But a simple way to handle it? Yes!
You can run this script on your computer:
IP=$(curl -s http://whatismyip.akamai.com/)
aws ec2 authorize-security-group-ingress --group-name "Redshift-SG" --protocol tcp --port 5439 --cidr "$IP/32"
It will add your current IP address to the Security Group.
After a while, the security group will fill up, so you'll need to empty it out and start again. (You could automate that too!)
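A sketch of that automation, remembering the previously added address in a local file (the file path is my own convention; the group name matches the example above):
# Revoke the old rule (if any), then authorize the current IP
OLD_IP=$(cat ~/.last_redshift_ip 2>/dev/null)
[ -n "$OLD_IP" ] && aws ec2 revoke-security-group-ingress \
    --group-name "Redshift-SG" --protocol tcp --port 5439 --cidr "$OLD_IP/32"
IP=$(curl -s http://whatismyip.akamai.com/)
aws ec2 authorize-security-group-ingress \
    --group-name "Redshift-SG" --protocol tcp --port 5439 --cidr "$IP/32"
echo "$IP" > ~/.last_redshift_ip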

Connect to Neptune on AWS from local machine

I am trying to connect to a Neptune DB in an AWS instance from my local machine in the office, like connecting to RDS from the office. Is it possible to connect to Neptune from a local machine? Is Neptune publicly accessible? Is there any way a developer can connect to Neptune from the office?
Neptune does not support public endpoints (endpoints that are accessible from outside the VPC). However, there are a few architectural options with which you can access your Neptune instance from outside your VPC. All of them have the same theme: set up a proxy (an EC2 machine, or an ALB, or something similar, or a combination of these) that resides inside your VPC, and make that proxy accessible from outside your VPC.
It seems like you want to talk to your instance purely for development purposes. The easiest option for that would be to spin up an ALB, and create a target group that points to your instance's IP.
Brief Steps (These are intentionally not in detail, please refer to AWS Docs for detailed instructions):
dig +short <your cluster endpoint>
This would give you the current master's IP address.
Create an ALB (See AWS Docs on how to do this).
Make your ALB's target group point to the IP address obtained in step #1. By the end of this step, you should have an ALB listening on PORT-A that forwards requests to IP:PORT, where IP is your database IP (from step #1) and PORT is your database port (the default is 8182).
Create a security group that allows inbound traffic from everywhere, i.e. an inbound TCP rule for 0.0.0.0/0 on PORT-A.
Attach the security group to your ALB
Now from your developer boxes, you can connect to your ALB endpoint at PORT-A, which would internally forward the request to your Neptune instance.
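For reference, a rough CLI sketch of the target-group part of these steps (the endpoint, VPC ID, and target group ARN are placeholders; the listener setup is omitted):
NEPTUNE_IP=$(dig +short <your cluster endpoint>)
aws elbv2 create-target-group --name neptune-tg --protocol HTTPS --port 8182 \
    --vpc-id <your-vpc-id> --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=$NEPTUNE_IP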
Do check out the ALB docs for details on how to create one and the concepts around it. If you need me to elaborate on any of the steps, feel free to ask.
NOTE: This is not a recommended solution for a production setup. IPs used by Neptune instances are bound to change with failovers and host replacements. Use this solution for testing purposes only. If you want a similar setup for production, feel free to ask a question and we can discuss options.
As already mentioned, you can't access it directly from outside your VPC.
The following link describes another solution using a SSH tunnel: connecting-to-aws-neptune-from-local-environment.
I find it much easier for testing and development purpose.
You can create the SSH tunnel with PuTTY as well.
Reference: https://github.com/M-Thirumal/aws-cloud-tutorial/blob/main/neptune/connect_from_local.md
Connect to AWS Neptune from the local system
There are many ways to connect to Amazon Neptune from outside of the VPC, such as setting up a load balancer or VPC peering.
Amazon Neptune DB clusters can only be created in an Amazon Virtual Private Cloud (VPC). One way to connect to Amazon Neptune from outside of the VPC is to set up an Amazon EC2 instance as a proxy server within the same VPC. With this approach, you will also want to set up an SSH tunnel to securely forward traffic to the VPC.
Part 1: Set up an EC2 proxy server.
Launch an Amazon EC2 instance in the same region as your Neptune cluster. In terms of configuration, Ubuntu can be used. Since this is only a proxy server, you can choose the lowest resource settings.
Make sure the EC2 instance is in the same VPC as your Neptune cluster. To find the VPC for your Neptune cluster, check the console under Neptune > Subnet groups. The instance's security group needs to be able to send and receive on port 22 for SSH and port 8182 for Neptune.
Lastly, make sure you save the key-pair file (.pem) and note the directory for use in the next step.
Part 2: Set up an SSH tunnel.
This step varies depending on whether you are running Windows or macOS.
Modify your hosts file to map localhost to your Neptune endpoint.
Windows: Open the hosts file as an Administrator (C:\Windows\System32\drivers\etc\hosts)
macOS: Open Terminal and type in the command: sudo nano /etc/hosts
Add the following line to the hosts file, replacing the text with your Neptune endpoint address.
127.0.0.1 localhost YourNeptuneEndpoint
Open Command Prompt as an Administrator on Windows, or Terminal on macOS, and run the following command. On Windows, you may need to run SSH from C:\Users\YourUsername\
ssh -i path/to/keypairfilename.pem ec2-user@yourec2instanceendpoint -N -L 8182:YourNeptuneEndpoint:8182
The -N flag prevents an interactive bash session with EC2 and forwards ports only. On the first successful connection you will be asked whether you want to continue connecting; type yes and press Enter.
To test the success of your local graph-notebook connection to Amazon Neptune, open a browser and navigate to:
https://YourNeptuneEndpoint:8182/status
You should see a report, similar to the one below, indicating the status and details of your specific cluster:
{
    "status": "healthy",
    "startTime": "Wed Nov 04 23:24:44 UTC 2020",
    "dbEngineVersion": "1.0.3.0.R1",
    "role": "writer",
    "gremlin": {
        "version": "tinkerpop-3.4.3"
    },
    "sparql": {
        "version": "sparql-1.1"
    },
    "labMode": {
        "ObjectIndex": "disabled",
        "DFEQueryEngine": "disabled",
        "ReadWriteConflictDetection": "enabled"
    }
}
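The same check also works from a terminal; thanks to the hosts-file entry above, the endpoint resolves to the tunnel:
curl https://YourNeptuneEndpoint:8182/status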
Close Connection
When you're ready to close the connection, use Ctrl+D to exit.
You can connect to Neptune using the Gremlin Console on your local machine.
Use this link to set up your local Gremlin Console; it works for me with Gremlin version 3.3.2.
You only have to update remote.yaml with your Neptune URL and port.
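For reference, a minimal remote.yaml might look something like this (the endpoint is a placeholder, and the serializer class depends on your TinkerPop version, so treat it as a sketch):
hosts: [YourNeptuneEndpoint]
port: 8182
connectionPool: { enableSsl: true }
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}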

How to connect between two aws instances

I have an AWS instance which holds a GitLab app, and another one holding the database. How can I achieve a connection between these two instances?
If the second instance is only there to store a database (and it's something like Postgres or MySQL), I'd recommend using RDS for it instead. RDS sets up the database in a way where you can whitelist the security group of the EC2 instance (your GitLab app), and it provides DNS and automatic backups/replication (if you enable Multi-AZ).
This guide is a good place to start:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_GettingStarted.CreatingConnecting.PostgreSQL.html
Make sure your security groups allow your instances to reach one another by modifying the inbound rules for the required ports (a CLI sketch follows after these steps).
Start your database on the one instance.
Your app should be able to connect to the database once it's up.
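For the security group step, the rule can also be added from the CLI (a sketch; the group IDs and the Postgres port 5432 are placeholders):
# Allow the GitLab instance's security group to reach the DB instance on 5432
aws ec2 authorize-security-group-ingress --group-id <db-security-group-id> \
    --protocol tcp --port 5432 --source-group <gitlab-security-group-id>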
For psql, for example, you would do something like this (note that -W only forces a password prompt; the password itself is not passed on the command line):
psql -h <DATABASE AWS INSTANCE IP> -p <port> -U <username> -W <database>

GCE/GKE NAT gateway route kills ssh connection

I'm trying to set up a NAT gateway for Kubernetes nodes on GKE/GCE.
I followed the instructions in the tutorial (https://cloud.google.com/vpc/docs/special-configurations, chapter "Configure an instance as a NAT gateway") and also tried the tutorial with Terraform (https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway).
But with both tutorials (even in newly created Google projects) I get the same two errors:
The NAT isn't working at all; traffic is still going out via the nodes directly.
I can't SSH into my GKE nodes -> timeout. I already tried setting up a rule with priority 100 that allows all tcp:22 traffic.
As soon as I tag the GKE node instances so that the configured route applies to them, the SSH connection is no longer possible.
You've already found the solution to the first problem: tag the nodes with the correct tag, or manually create a route targeting the instance group that is managing your GKE nodes.
Regarding the SSH issue:
This is answered under "Caveats" in the README for the NAT Gateway for GKE example in the Terraform tutorial repo you linked (reproduced here to comply with Stack Overflow rules).
The web console mentioned below uses the same SSH mechanism as kubectl exec internally. The short version is that, as of the time of posting, it's not possible to both route all egress traffic through a NAT gateway and use kubectl exec to interact with pods running on a cluster.
Update 2018-09-25:
There is a workaround available if you only need to route specific traffic through the NAT gateway, for example, if you have a third party whose service requires whitelisting your IP address in their firewall.
Note that this workaround requires strong alerting and monitoring on your part as things will break if your vendor's public IP changes.
If you specify a strict destination IP range when creating your Route in GCP then only traffic bound for those addresses will be routed through the NAT Gateway. In our case we have several routes defined in our VPC network routing table, one for each of our vendor's public IP addresses.
In this case the various kubectl commands including exec and logs will continue to work as expected.
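A sketch of such a route (the route name, network, tag, and vendor IP are placeholders):
gcloud compute routes create vendor-via-nat \
    --network=default \
    --destination-range=203.0.113.7/32 \
    --next-hop-instance=nat-gateway-us-central1 \
    --next-hop-instance-zone=us-central1-a \
    --tags=gke-dev-node \
    --priority=800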
A potential workaround is to use the command in the snippet below to connect to a node and use docker exec on the node to enter a container. This of course means you will need to first locate the node your pod is running on before jumping through the gateway onto the node and running docker exec.
Caveats
The web console SSH will no longer work; you have to jump through the NAT gateway machine to SSH into a GKE node:
eval ssh-agent $SHELL
ssh-add ~/.ssh/google_compute_engine
CLUSTER_NAME=dev
REGION=us-central1
gcloud compute ssh $(gcloud compute instances list --filter=name~nat-gateway-${REGION} --uri) \
    --ssh-flag="-A" -- \
    ssh $(gcloud compute instances list --filter=name~gke-${CLUSTER_NAME}- --limit=1 --format='value(name)') \
    -o StrictHostKeyChecking=no
Source: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples/gke-nat-gateway
You can use kubeip to automatically assign static external IP addresses to your GKE nodes for easier whitelisting:
https://blog.doit-intl.com/kubeip-automatically-assign-external-static-ips-to-your-gke-nodes-for-easier-whitelisting-without-2068eb9c14cd