Ansible, connecting to bastion server in AWS VPC, host unreachable

How can I connect to a bastion server in an AWS VPC using Ansible 2.x to perform a Docker Swarm setup? I've seen this question and the official FAQ.
I already tried providing the following via --extra-vars:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@my.bastion.server.com"'
I also tried putting that parameter in ansible.cfg, and something like:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=600s -J ec2-user@my.bastion.dns.com
I tried many combinations, but I always get this error message when running a ping task in a playbook:
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 10.1.xx.xx port 22: Operation timed out\r\n"}
It is probably worth mentioning that:
I'm able to connect to the private hosts in my VPC normally using the ssh -J option, for example: ssh -J user@my.bastion.server.com user@vpc.host.private.ip
I'm using Ansible's ec2.py dynamic inventory, with ec2.ini configured to map the private IPs for a given tag entry.

It was an SSH misconfiguration problem. I was able to fix it with the following configuration.
1) ansible.cfg file
[ssh_connection]
ssh_args = -o ProxyCommand="ssh -W %h:%p -q $BASTION_USER@$BASTION_HOST" -o ControlPersist=600s
control_path=%(directory)s/%%h-%%r
pipelining = True
2) ec2.ini file
[ec2]
regions = us-xxxx-x
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address
3) Playbook Execution Command
export BASTION_USER=xxx-xxxx;
export BASTION_HOST=ec2-xx-xx-xx-xx.xxxxx.compute.amazonaws.com;
ansible-playbook -u ec2-xxxx \
-i ./inventory/ec2.py \
./playbook/ping.yml \
--extra-vars \
"var_hosts=tag_Name_test_private ansible_ssh_private_key_file=~/.ssh/my-test-key.pem" -vvv
And voila!
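To sanity-check the proxy settings outside of Ansible, the same jump can be exercised with plain ssh first. This is a minimal sketch reusing the placeholder user, key, and private IP from the examples above:

# Manually verify the bastion hop before involving Ansible
export BASTION_USER=xxx-xxxx
export BASTION_HOST=ec2-xx-xx-xx-xx.xxxxx.compute.amazonaws.com
ssh -o ProxyCommand="ssh -W %h:%p -q $BASTION_USER@$BASTION_HOST" \
-i ~/.ssh/my-test-key.pem ec2-xxxx@10.1.xx.xx 'echo reachable'

If this prints "reachable", the same ProxyCommand should work from ansible.cfg.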


How to connect polkadot.js to the Parastate local node

I'm trying to connect to my local node here:
https://polkadot.js.org/apps/
I go to the top left corner and choose "Local Node".
However, I do not understand where to enter the IP and port for my node.
How do I do that?
Enable SSH tunneling to the server.
The first step is to open an SSH tunnel from the local host to the remote server where your ParaState node is hosted. The command below opens a tunnel to remote host xx.xx.xx.xx, forwarding local port 9944, which is what the polkadot.js.org endpoint needs:
ssh -N -L 9944:127.0.0.1:9944 -i C:\Users\sshkey\privatekey.ppk root@xx.xx.xx.xx
In case you are not using an SSH key and would like to connect the traditional way by supplying a password, use:
ssh -N -L 9944:127.0.0.1:9944 root@xx.xx.xx.xx
Enter the password when prompted.
The terminal will then just sit there with an apparently hanging window; don't panic, you are all good. The tunnel stays open as long as that command is running.
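To confirm the tunnel is actually listening on the local end, a quick check (a sketch; nc flags vary slightly between netcat variants):

# Succeeds only if the forwarded port is open locally
nc -z 127.0.0.1 9944 && echo "tunnel is up"

Then, in the polkadot.js Apps UI, the network selector in the top-left corner has a Development > Local Node entry that connects to ws://127.0.0.1:9944, which now goes through the tunnel to your remote node.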

Error copying file from local to aws ec2 using scp

I am trying to copy local files to my ec2 instance.
When I run this command:
scp -i keypair.pem process.py ubuntu@ip-xx-xxx-xx-xxx.compute-1.amazonaws.com:~/.
I get this error:
ssh: Could not resolve hostname ip-xx-xxx-xx-xxx.compute-1.amazonaws.com: nodename nor servname provided, or not known
lost connection
When I run this command instead:
scp -i keypair.pem process.py ubuntu@ip-xx-xxx-xx-xxx:~/.
It stalls for about a minute, then I get this error:
ssh: connect to host ip-xx-xxx-xx-xxx port 22: Operation timed out
lost connection
Any ideas on how to resolve this?
The easiest way to use scp is to start with an SSH command that already works:
ssh -i keypair.pem ec2-user@1.2.3.4
Then, modify it to use scp:
scp -i keypair.pem foo.txt ec2-user@1.2.3.4:/tmp/
The only things changed were:
ssh becomes scp
Insert source filename
Append :/target/
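As for the error in the question: ip-xx-xxx-xx-xxx.compute-1.amazonaws.com is the instance's private DNS name, which only resolves from inside the VPC; from a local machine you need the public DNS name or public IP. A sketch, with the hostname as a placeholder in the public ec2-... form:

# Use the instance's *public* DNS name when copying from outside the VPC
scp -i keypair.pem process.py ubuntu@ec2-xx-xxx-xx-xxx.compute-1.amazonaws.com:~/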

How to setup SSH tunneling in AWS

I have an RDS instance with a MySQL database which can only be accessed by an EC2 instance running in AWS. Now I want to access my RDS instance from my local machine using SSH tunneling. I searched a lot on the net, but none of the solutions worked. Can anyone please tell me how to do it step by step with a working solution?
Any help will be highly appreciated!
I tried to run:
ssh -i myNewKey.pem -N -L 3306:myredinstance:3306 ec2-user@myec2-instance
then mysql -u dbauser -p -h 127.0.0.1 in the mysql-js utility, and it gave me an error.
You can do it by setting up an SSH tunnel:
ssh -i /path/to/key -N -L 3306:an_rds_endpoint:3306 user@yourserver.com
Then connect locally
mysql -u myuser -p -h 127.0.0.1
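If a MySQL server is already running locally on 3306, the tunnel can bind another local port instead, and ssh can background itself once authenticated. A sketch reusing the placeholder endpoint above; 3307 is an arbitrary choice:

# -f backgrounds ssh after authentication; -N forwards ports without running a remote command
ssh -f -N -i /path/to/key -L 3307:an_rds_endpoint:3306 user@yourserver.com
mysql -u myuser -p -h 127.0.0.1 -P 3307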

Using Fabric on a cascaded host environment

I'm trying to use Fabric to connect to several nodes (Ubuntu VMs), but I cannot reach all of them from the VM where Fabric is installed. Instead, I first need to go to one specific node, called the entry point, and from there to another VM, from which all the other VMs are reachable. Please see the figure below. Any suggestions on how to use Fabric to achieve this?
Network Architecture
With more than one "jump", the easiest way to achieve this is to let Fabric read the ProxyCommand directives (or equivalent) from your ~/.ssh/config.
Have a look at the documentation.
In your configuration file you should have something like the following:
Host entryPoint
  HostName your-entrypoint-hostname-or-ipaddress
Host VM0
  ProxyCommand ssh -q -W %h:%p entryPoint
Host VM1 VM2 VMN
  ProxyCommand ssh -q -W %h:%p VM0
For a single jump you may consider using env.gateway instead.
A slight variation, using nc:
Host entryPoint
  HostName your-entrypoint-hostname-or-ipaddress
Host VM0
  ProxyCommand ssh -q entryPoint nc -q0 %h %p
Host VM1 VM2 VMN
  ProxyCommand ssh -q VM0 nc -q0 %h %p
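With the config in place, each hop can be verified with plain ssh before pointing Fabric at the hosts. A quick sketch using the aliases defined above:

# Each command should print the hostname of the corresponding VM
ssh entryPoint hostname
ssh VM0 hostname
ssh VM1 hostname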

Block docker access to specific IP

I'd like my EC2 instance to have IAM-based permissions, but don't want the docker containers on that instance to have the same permissions. I believe it should be sufficient to block access to the magic IP 169.254.169.254. Is it sufficient to run:
iptables -I DOCKER -s 169.254.169.254 -j DROP
Do I also need to configure my docker daemon with --icc=false or --iptables=false?
Finally got this working; you need to add this rule on the host machine:
1) Drop Docker bridge packets outbound to 169.254.169.254 on ports 80 or 443.
sudo iptables -I FORWARD -i docker0 -d 169.254.169.254 \
-p tcp -m multiport --dports 80,443 -j DROP
Now, if I try to connect inside the container:
$ sudo docker run -it ubuntu bash
root@8dc525dc5a04:/# curl -I https://www.google.com
HTTP/1.1 200 OK
root@8dc525dc5a04:/# curl -I http://169.254.169.254/
# <-- hangs indefinitely, which is what we want
Connections to the special IP still work from the host machine, but not from inside containers.
Note: my use case is for Google Compute Engine and prevents Docker containers from accessing the metadata server on 169.254.169.254, while still allowing DNS and other queries against that same IP. Your mileage may vary on AWS.
I would recommend the following variation on the accepted answer:
sudo iptables \
--insert DOCKER-USER \
--destination 169.254.169.254 \
--jump REJECT
The reason is that this command adds the rule to the DOCKER-USER chain, which Docker is guaranteed not to modify.
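To confirm the rule landed and behaves as intended, a sketch (the curlimages/curl image is just an illustrative choice):

# List the rules in the DOCKER-USER chain
sudo iptables -L DOCKER-USER -n --line-numbers
# With REJECT, this fails fast instead of hanging the way DROP does
sudo docker run --rm curlimages/curl -sS -m 5 http://169.254.169.254/ || echo "blocked"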
Sources:
https://ops.tips/blog/blocking-docker-containers-from-ec2-metadata/
https://docs.docker.com/network/iptables/