Using Fabric in a cascaded host environment - fabric

I'm trying to use Fabric to connect to several nodes (Ubuntu VMs), but I cannot reach all of the nodes from the VM where Fabric is installed. Instead, I first have to go to one specific node (the entry point), and from that entry point to another VM, from which all the remaining VMs are reachable. Please see the figure below. Any suggestions on how to use Fabric to achieve this?
Network Architecture

With more than one "jump", the easiest way to achieve this is to let Fabric read the ProxyCommand directives in your ~/.ssh/config (or equivalent).
Have a look at the documentation.
In your configuration file you should have something like the following:
Host entryPoint
    HostName your-entrypoint-hostname-or-ipaddress

Host VM0
    ProxyCommand ssh -q -W %h:%p entryPoint

Host VM1 VM2 VMN
    ProxyCommand ssh -q -W %h:%p VM0
For a single jump, you may consider using env.gateway instead (see the fabfile sketch after the configuration examples).
A slight variation, using nc:
Host entryPoint
    HostName your-entrypoint-hostname-or-ipaddress

Host VM0
    ProxyCommand ssh -q entryPoint nc -q0 %h %p

Host VM1 VM2 VMN
    ProxyCommand ssh -q VM0 nc -q0 %h %p
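On the Fabric side, a minimal fabfile sketch of both options (assuming Fabric 1.x, where env lives in fabric.api; the user and host names are placeholders):

# fabfile.py - a minimal sketch, assuming Fabric 1.x
from fabric.api import env, run, task

env.use_ssh_config = True          # let Fabric honour the ProxyCommand entries above
# env.gateway = 'user@entryPoint'  # single-jump alternative: route connections through one host
env.hosts = ['VM1', 'VM2']         # placeholder target hosts

@task
def uname():
    run('uname -a')                # runs on each host in env.hosts

Run it with fab uname; the multi-hop routing happens transparently inside SSH.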

Related

Cannot connect to RDS from inside a docker container, I can from the host, and I can from a local docker container

I have an ec2 instance with docker compose installed, running a single container. I have the same setup replicated locally on my machine.
Using nc -v <host> 5432 results in:
From my machine: success
From inside a docker container running on my machine: success
From inside the EC2 instance: success
From inside a docker container running on the EC2 instance: "Host is unreachable"
I'm guessing there's something I'm missing in the EC2 instance's Docker config; can anyone point me in the right direction?
This is the docker_boot.service file
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/ubuntu/metabase
ExecStart=sudo /usr/bin/docker-compose -f /home/ubuntu/metabase/docker-compose.yml up -d --remove-orphans
[Install]
WantedBy=multi-user.target
The reason for this is that your Docker container is trying to resolve the RDS DNS name via public DNS rather than the VPC's private DNS.
For a quick workaround, you can nslookup your RDS DNS name, take one of its IPv4 addresses, and use that single IPv4 address as your host:
nslookup <ID>.rds.amazonaws.com
For a clean workaround, you need to point your Docker container's DNS configuration at your VPC's internal DNS IPv4 address. Using --dns, you can set this quickly, and you can add more DNS servers if your application needs to reach other services as well:
docker run --name app --dns=169.254.169.253 -p 80:5000 -d app
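Since the question starts the container through docker-compose, the equivalent there is the dns key on the service; a minimal sketch (the service and image names are placeholders, keep your own definitions):

services:
  app:
    image: your-app-image    # placeholder; keep your existing service definition
    dns:
      - 169.254.169.253      # the VPC-internal (link-local) DNS address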
References:
https://docs.docker.com/config/containers/container-networking/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html

How to connect polkadot.js to the Parastate local node

I'm trying to connect to my local node here:
https://polkadot.js.org/apps/
I go to the top left corner and choose "local node".
However, I do not understand where to enter the IP and port for my node.
How do I do that?
Enable SSH tunneling to the server.
The first step is to open an SSH tunnel from your local host to the remote server where your ParaState node is hosted. The command below opens a tunnel to remote host xx.xx.xx.xx on port 9944, which is the port the polkadot.js.org endpoint needs:
ssh -N -L 9944:127.0.0.1:9944 -i C:\Users\sshkey\privatekey.ppk root@xx.xx.xx.xx
In case you are not using an SSH key and would like to connect the traditional way by supplying a password, use the form below:
ssh -N -L 9944:127.0.0.1:9944 root@xx.xx.xx.xx
Enter the password when prompted.
You will just have an empty, hanging window; don't panic, you are all good. With the tunnel up, the "Local Node" option at https://polkadot.js.org/apps/ (which points at ws://127.0.0.1:9944 by default) now reaches your remote node, so there is no need to enter an IP and port anywhere.

ssh to docker container hosted on EC2

I want to run a Docker container on EC2, and I also need to ssh into the container for debugging purposes. I have two ports open for ssh, 22 and 8022, on my EC2 instance (via its security group). The problem is that when I try to bind port 22 of my Docker container to port 8022, I get an "address already in use" error, and the address is in use by the sshd program. If I kill that process, I can't ssh into the instance from my localhost at all. How can I overcome this deadlock?
As mentioned in the comments, you don't need to run ssh inside the container in order to get a shell in it. After you ssh into the EC2 instance, you can use the docker exec command to enter the container:
docker exec -it <container-name> bash
If you still want to ssh into the container directly, then you need to do the following:
Start the container and map port 22 inside to a free port outside:
docker run -p 2222:22 ...
After starting the container, exec into it and install an SSH server if one is not yet installed, then start the SSH service. Note that systemctl start sshd usually fails inside a container, since there is no systemd; use the distribution's service script instead, as sketched below.
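For example, assuming a Debian/Ubuntu-based image (package and service names are assumptions and differ on other distributions):
docker exec -it <container-name> bash
# then, inside the container:
apt-get update && apt-get install -y openssh-server
mkdir -p /var/run/sshd    # privilege-separation directory sshd expects on Debian/Ubuntu
service ssh start         # instead of systemctl, which requires systemd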
ssh into the container using the EC2 instance's IP and the mapped port:
ssh <container-user>@<ec2-instance-ip> -p 2222
This will connect to the EC2 instance and redirect you into the container's SSH server, due to the port mapping.

How to setup SSH tunneling in AWS

I have an RDS instance with a MySQL database which can only be accessed from an EC2 instance running in AWS. Now I want to access the RDS instance from my local machine using SSH tunneling. I searched a lot on the net but none of the solutions worked. Can anyone please tell me how to do it step by step, with a working solution?
Any help will be highly appreciated!
I tried to run:
ssh -i myNewKey.pem -N -L 3306:myredinstance:3306 ec2-user@myec2-instance
and then mysql -u dbauser -p -h 127.0.0.1 in the mysql-js utility, and it gave me an error.
You can do it by setting up an SSH tunnel:
ssh -i /path/to/key -N -L 3306:an_rds_endpoint:3306 user@yourserver.com
Then connect locally:
mysql -u myuser -p -h 127.0.0.1
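If the mysql command fails, it can help to first check that the tunnel is actually listening on the local port (an optional sanity check):
nc -zv 127.0.0.1 3306    # should report success while the ssh command above is running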

Block docker access to specific IP

I'd like my EC2 instance to have IAM-based permissions, but don't want the docker containers on that instance to have the same permissions. I believe it should be sufficient to block access to the magic IP 169.254.169.254. Is it sufficient to run:
iptables -I DOCKER -s 169.254.169.254 -j DROP
Do I also need to configure my docker daemon with --icc=false or --iptables=false?
Finally got this working. You need to add a rule on the host machine that drops Docker bridge packets outbound to 169.254.169.254 on port 80 or 443:
sudo iptables -I FORWARD -i docker0 -d 169.254.169.254 \
-p tcp -m multiport --dports 80,443 -j DROP
Now, if I try to connect inside the container:
$ sudo docker run -it ubuntu bash
root@8dc525dc5a04:/# curl -I https://www.google.com
HTTP/1.1 200 OK
root@8dc525dc5a04:/# curl -I http://169.254.169.254/
# <-- hangs indefinitely, which is what we want
Connections to the special IP still work from the host machine, but not from inside containers.
Note: my use case is for Google Compute Engine and prevents Docker containers from accessing the metadata server on 169.254.169.254, while still allowing DNS and other queries against that same IP. Your mileage may vary on AWS.
I would recommend the following variation on the accepted answer:
sudo iptables \
--insert DOCKER-USER \
--destination 169.254.169.254 \
--jump REJECT
The reason for this is that the command above adds the rule to the DOCKER-USER chain, which Docker is guaranteed not to modify (unlike the DOCKER chain, which Docker itself manages and may rewrite).
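As a quick check (the image name here is just an example), a REJECTed request from a container should now fail immediately instead of hanging:
sudo docker run --rm curlimages/curl -sS -m 5 http://169.254.169.254/ || echo blocked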
Sources:
https://ops.tips/blog/blocking-docker-containers-from-ec2-metadata/
https://docs.docker.com/network/iptables/