This is how I am trying to retrieve the external IP address of VMs:
{{ hostvars[inventory_hostname]['ansible_env'].SSH_CONNECTION.split(' ')[2] }}
Here is another way to see all the IP addresses associated with a node:
ansible [hostname (or) hostgroup] -m setup -i hosts -u [user name] | grep SSH
Results:
"SSH_CLIENT": "<ip a> 57894 22",
"SSH_CONNECTION": "<ip a> 57894 <internal ip> 22",
"SSH_TTY": "/dev/pts/0",
What the first snippet returns is the 2nd IP, i.e. the VM's internal IP.
How can Ansible pull the VM's external IP?
Also, what is <ip a>? It's certainly not the VM's external IP address.
I believe the ansible_host variable would do the trick; the playbook below outputs the IP address of the VM. (As for <ip a>: SSH_CONNECTION is formatted as 'client_ip client_port server_ip server_port', so <ip a> is the address of the machine Ansible connected from, and index [2] picks the server-side, i.e. internal, IP.)
- name: Show vm's ip
  hosts: gcp
  tasks:
    - debug:
        msg: "{{ ansible_host }}"
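To run it, something like the following should work, assuming the inventory file is named hosts and the playbook is saved as show_ip.yml (both names are placeholders):
# runs the debug task against every host in the gcp group
ansible-playbook -i hosts show_ip.yml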
I have an EC2 instance with Docker Compose installed, running a single container. I have the same setup replicated locally on my machine.
Using nc -v <host> 5432 results in:
From my machine > success
From inside a docker container running on my machine > success
From inside the ec2 instance > success
From inside a docker container running on the ec2 > Host is unreachable
I'm guessing there's something I'm missing in the EC2 instance's Docker config; can anyone point me in the right direction?
This is the docker_boot.service file:
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/ubuntu/metabase
ExecStart=sudo /usr/bin/docker-compose -f /home/ubuntu/metabase/docker-compose.yml up -d --remove-orphans
[Install]
WantedBy=multi-user.target
The reason is that your Docker container is trying to resolve the RDS DNS name using a public DNS server rather than the VPC's private one.
As a quick workaround, you can nslookup your RDS DNS name, take one of its IPv4 addresses, and use that single IPv4 address as your host:
nslookup <ID>.rds.amazonaws.com
For a cleaner workaround, point the Docker container's DNS configuration at your VPC's internal DNS IPv4 address. With --dns you can set this quickly, and you can add more DNS servers if your application needs to reach other services as well:
docker run --name app --dns=169.254.169.253 -p 80:5000 -d app
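If you start the container through Docker Compose, as in the question, the equivalent setting is the dns key; a minimal sketch, with the service and image names as placeholders:
services:
  app:
    image: app
    ports:
      - "80:5000"
    dns:
      # VPC-provided resolver, so the RDS endpoint resolves to its private IP
      - 169.254.169.253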
References:
https://docs.docker.com/config/containers/container-networking/
https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html
I created an EC2 instance and an EFS file system. I am trying to mount the file system, for which I use the following command:
sudo mount -t nfs4 fs-0d06d36f390aeXXXX.efs.us-east-2.amazonaws.com:/myDir
And I am getting this error:
mount.nfs4: Failed to resolve server fs-0d06d36f390aeXXXX.efs.us-east-2.amazonaws.com: No address associated with hostname
Could someone guide me on how to solve this?
You can solve this issue with the following workaround.
First, nslookup the NFS domain name from a working machine and take the IP address:
nslookup fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
Server: 127.0.0.51
Address: 127.0.0.51#51
Non-authoritative answer:
Name: fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
Address: 10.143.27.147
Add the IP address to /etc/hosts along with the NFS domain name:
vi /etc/hosts
10.143.27.147 fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com
and try to mount with the following command:
sudo mount -t nfs4 fs-7dsqsssscas5e11sefs.eu-central-1.amazonaws.com:/ /mnt
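Alternatively, you can skip the hosts-file edit and mount by the resolved IP address directly; a sketch using the address from the nslookup output above:
# mount the EFS root by IP instead of by hostname
sudo mount -t nfs4 10.143.27.147:/ /mnt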
We have a Spark master node on a Google Cloud Dataproc cluster, which we want to connect to by hostname and NOT by internal IP.
We want to connect to/ping these VMs from one another.
Rationale: when we drop/create any of the VMs/clusters, the internal IP changes, and we don't want to update a bunch of connection strings every time.
The gcloud command line lists the master node VM:
vn524i0@m-c02zf1nylvdt ~$ gcloud compute instances list | grep anvil
anvil-dataproc-m us-east1-a custom-16-65536 10.22.162.40 RUNNING
From another GCP VM (in the same region), when I try to ping the VM using the internal IP, I am able to (so ICMP is enabled):
vn524i0@m-c02zf1nylvdt ~$ ping 10.22.162.40 -c 1
PING 10.22.162.40 (10.22.162.40): 56 data bytes
64 bytes from 10.22.162.40: icmp_seq=0 ttl=56 time=140.232 ms
--- 10.22.162.40 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
But when I try to ping by hostname, I get an Unknown Host error:
vn524i0@m-c02zf1nylvdt ~$ ping anvil-dataproc-m
ping: cannot resolve anvil-dataproc-m: Unknown host
I've followed the global & zonal DNS guide and used the hostname pattern suggested by Google as per this page.
Zonal DNS hostname style:
vn524i0@m-c02zf1nylvdt ~$ ping anvil-dataproc-m.us-east1-a.c.PROJECT_NAME.internal -c 1
ping: cannot resolve anvil-dataproc-m.us-east1-a.c.PROJECT_NAME.internal: Unknown host
Global DNS hostname style:
vn524i0@m-c02zf1nylvdt ~$ ping anvil-dataproc-m.c.PROJECT_NAME.internal -c 1
ping: cannot resolve anvil-dataproc-m.c.PROJECT_NAME.internal: Unknown host
Any guidance on how to connect/ping/nslookup by hostname rather than depend on the internal IP, please?
Right now, I am setting up an AWS server with a Tomcat Docker container.
I successfully mapped the domain name provided by Namecheap, but unfortunately my website can still be accessed by the public IP address.
I want to redirect the IP address to the domain name.
I tried setting the hosts file, but it did not work. For example:
127.0.0.1 www.abc.com
You need to map the instance's public IP in your domain provider's settings, not the local IP of Docker:
1. Publish the Docker port to the host: docker run -dit -p HOST_PORT:CONTAINER_PORT your_image
2. Allow the port in the instance's security group.
3. Update DNS to point to the public IP address of the instance.
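Once the DNS record is in place, you can verify the mapping from any machine (www.abc.com is the example domain from the question):
# should print the instance's public IP
dig +short www.abc.com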
Update:
You need to run Nginx on your EC2 machine, then add the following server block to the Nginx config:
server {
    listen 80;
    server_name YOUR_INSTANCE_PUBLIC_IP;
    return 301 $scheme://www.abc.com$request_uri;
}
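After adding the block, run a syntax check and reload to apply it (standard nginx commands, assuming a systemd-based install):
# validate the configuration, then reload nginx without downtime
sudo nginx -t && sudo systemctl reload nginx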
Redirecting 127.0.0.1 to www.abc.com is not possible, as 127.0.0.1 is a local IP and is not accessible from outside the EC2 instance.
Expose the Docker container port (the tomcat image name is shown as an example):
docker run --name awsContainer -p 80:8080 -d tomcat
Here -p 80:8080 maps port 80 on the AWS instance to port 8080 inside the Tomcat container.
I have two instances:
1. Orion with 192.168.x.1, public IP 130.a.b.c
2. Keystone with 192.168.x.2, port 8000 opened and tested from localhost
Both instances have the same routing rule group.
I edited my security group rules, adding port 8000 with a CIDR of 192.168.x.2/32. Now I test it with telnet from my computer:
telnet 130.a.b.c 8000
Result: connection timed out.
Am I wrong somewhere? How can I connect from my computer to port 8000 on my second instance using the public IP (configured on the first instance)? Or do I need a second public IP?
There are many ways to do this: IP forwarding with iptables, haproxy, etc.
However, I think the easiest way is SSH port forwarding on the host with the public IP:
ssh -f -N -o ServerAliveInterval=30 -L 0.0.0.0:8000:192.168.x.2:8000 $YOUR_USER@192.168.x.1
-L 0.0.0.0:8000:192.168.x.2:8000 means it will listen on every network interface (0.0.0.0:8000) and forward every connection to 192.168.x.2:8000.
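Once the tunnel is up on the public-IP host, the earlier test from your computer should succeed, provided the security group also allows port 8000 from your computer's address:
telnet 130.a.b.c 8000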
If you don't have a password for your user, or SSH is not configured to accept passwords, you could consider either adding a new authorized key (so you can log in locally) or connecting to your public IP using -A so your credentials are forwarded:
ssh -A -i $PRIVATE_KEY_FILE $YOUR_USER@130.a.b.c