Vagrant/Puppet Connection Timeout (Obvious Fixes Attempted, Working Previously) - virtualbox

For some reason my Vagrant/Puppet instance stopped working out of the blue: I am no longer able to reach the VM from my host machine, despite no configuration or network changes.
Interestingly, the private network must be recognized, since the browser does attempt to connect; however, the request seems to time out when issued from OS X. Also worth noting, I have not installed any system updates, and the VM was working previously on 10.9.
Steps I have tried to resolve the issue:
vagrant destroy && vagrant up
Result: Vagrant loads properly, SSH works, and Apache is running, with the proper result returned from ping 127.0.0.1
vagrant reload
Result: Same as above; VM reloads successfully, no change in network accessibility
sudo killall -HUP mDNSResponder
Result: No change in accessibility via the bound IP (10.0.0.100)
Port forwarding (explicit) vs. "private_network" in the Vagrantfile
Result: No change in accessibility via the bound IP (10.0.2.15)
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Result: No change in accessibility via the bound IP, connection still times out
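For reference, a quick way to confirm the flush above actually took effect inside the guest (a sketch; run through vagrant ssh, and sudo may or may not be needed depending on the box):
vagrant ssh -c "sudo iptables -L -n -v && sudo iptables -t nat -L -n -v"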
Vagrant File: http://pastebin.com/Hk8drWxF
Puppet File: http://pastebin.com/20Sp1m22
Any thoughts? Thanks!

Could this be an issue with the netmask? You specify two IPs there: 10.0.0.100 and 10.0.2.15. If you're using the default subnet mask (class C, i.e. 255.255.255.0), those end up on different subnets and are unable to speak directly to each other.
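One way to check that theory (a sketch, assuming a stock box where ifconfig/route are available in the guest) is to compare the guest's interface addresses and netmasks with the host's routing table:
# inside the guest
vagrant ssh -c "ifconfig -a && route -n"
# on the OS X host, check whether a route to the private network exists
netstat -rn | grep 10.0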

Related

SSH reverse port forward on EC2 AWS instance

I used to have an SSH reverse port forward from my local computer to a remote EC2 AWS server on port 9999 (9999 on both machines).
It used to work, but I created a new instance and now it doesn't anymore (it's half working). I'm not sure what I did to make it work back then, or whether something changed.
I have a process running on my computer on port 9999, and I want it to be reachable through port 9999 on my EC2 instance.
On my computer, curl "127.0.0.1:9999" is working.
But I want curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" to work; for now it doesn't, giving me the error curl: (7) Failed to connect to ec2-xx-xx-xx-xx-xx.compute.amazonaws.com port 9999 after 59 ms: Connection refused
EC2 Security group is set to open 9999 on TCP for 0.0.0.0/0.
I create the forwarded port with the command:
ssh -R 9999:localhost:9999 -i "/home/example/XXX.pem" ubuntu@ec2-xx-xx-xx-xx-xx.compute.amazonaws.com
The SSH connection is established without errors.
Inside this SSH session I can even run curl "127.0.0.1:9999" and IT IS WORKING, reaching my local computer.
But the request from the web isn't: curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" doesn't work.
The path is good: if I install apache2 on port 80, curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:80" works (port 80 is added to the security group in the same way).
I did sudo ufw disable, same problem.
Do you have an idea what I'm missing?
EDIT: In the ssh -R forward session on the EC2:
ubuntu@awsserver:~$ php -S 0.0.0.0:9999 -t .
[Wed Dec 14 16:35:11 2022] Failed to listen on 0.0.0.0:9999 (reason: Address already in use)
BUT, if I open a normal SSH session, I can run php -S 0.0.0.0:9999 -t ., and then curl "ec2-xx-xx-xx-xx-xx.compute.amazonaws.com:9999" works everywhere as expected.
So it is telling me that the port is already in use (by the ssh -R command), yet the port appears closed when I try to connect to it from outside... I don't get it.
The answer wasn't EC2/AWS related.
It's a security feature of SSH: by default, remote-forwarded ports bind to the loopback interface only, and I had to override that with the server setting GatewayPorts yes.
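For anyone else hitting this, a sketch of the fix under the assumption of a stock Ubuntu AMI (paths and service name may differ):
# on the EC2 instance: allow remote forwards to bind to non-loopback addresses
echo "GatewayPorts yes" | sudo tee -a /etc/ssh/sshd_config
sudo systemctl restart ssh    # or: sudo service ssh restart on older Ubuntu
# from the local machine: re-create the reverse tunnel, binding all interfaces explicitly
ssh -R 0.0.0.0:9999:localhost:9999 -i "/home/example/XXX.pem" ubuntu@ec2-xx-xx-xx-xx-xx.compute.amazonaws.com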

Why does boot2docker, and port forwarding to my docker instances, periodically hang?

I have run the following command to forward the Sinatra and Redis ports to my docker instance running in VirtualBox on OS X:
ports=( 4567 6379 )
for port in "${ports[@]}"
do
  echo "Forwarding $port"
  VBoxManage modifyvm "boot2docker-vm" --natpf1 "tcpport$port,tcp,,$port,,$port"
done
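(To rule out the rules themselves, one way to confirm they were actually created is to dump the VM's forwarding entries; the rule names follow the tcpport$port pattern used above.)
VBoxManage showvminfo "boot2docker-vm" --machinereadable | grep '^Forwarding'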
However, periodically (roughly every 60 seconds) requests to either of these docker instances over my machine's public IP, originating from my machine, hang for 40-60 seconds, even though the docker instance is healthy and I can connect directly via 192.168.59.103.
Thus, why would a connection such as:
redis-cli -h 192.168.1.1 PING
periodically hang, but
redis-cli -h 192.168.59.103 PING
always work? Is there some kind of bug in VirtualBox or boot2docker?
Moreover, during the periods where these requests hang, I have noticed that calls to
boot2docker ip
and
boot2docker ssh
themselves both hang. I am running boot2docker 1.6.2 and VirtualBox 4.3.28 on OS X 10.10.3.
Additional debugging reveals that inter-instance connectivity is now impaired as well. I have linked two containers, and periodically HTTP requests between them will hang. I went so far as to run telnet container_name 4567, then I typed
GET /
which, of course, is the most basic way to test a webserver. From inside container_name, I ran curl http://localhost:4567/. The telnet request hung, but the curl http://localhost:4567/ returned immediately.
This is one of the main reasons we added the extra localhost-only interface (192.168.59.103) - the VirtualBox NAT port forwarding is woeful and very unreliable.
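Given that, a practical workaround (a sketch, assuming the default boot2docker host-only address) is to bypass the NAT forwards and point clients at the host-only IP directly:
B2D_IP=$(boot2docker ip 2>/dev/null)    # usually 192.168.59.103
redis-cli -h "$B2D_IP" PING
curl "http://$B2D_IP:4567/"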

Can't connect to VM running Django

Using VirtualBox, I have a NAT-enabled VM running CentOS 7. The host OS is Windows 7. I can't seem to access the Django web server running inside the VM. What am I missing?
I have two port forwarding rules set for the virtual machine: one for SSH and one for port 8000.
I start the Django web server on the guest OS with:
python manage.py runserver 0.0.0.0:8000
And I try to visit the webpage on the host OS at:
http://localhost:8000
Google Chrome gives me the error code ERR_CONNECTION_RESET.
The result of curl on the host OS:
[user@win7 ~ ]$ curl http://localhost:8000
curl: (56) Recv failure: Connection reset by peer
Here is the result of a netstat performed on the guest OS:
[user@vm ~ ]$ netstat -na | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
Here is the result of a netstat performed on the host OS (with Cygwin):
[user@win7 ~ ]$ netstat -na | grep 8000
TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING
It is also worth mentioning that the SSH rule works. I can SSH into the machine with no problems.
This is not a solution, but a work-around for my problem. Maybe this will help anyone encountering a similar problem who just wants to be able to connect to their VM's webserver.
Since SSH was working, I figured I could access the webpage via a SSH Tunnel. The syntax for doing so via command line is:
ssh -L <local-port>:<remote-host>:<remote-port> <user>@<ssh-host>
So in my situation, if I wanted to open a tunnel via the command line I would do:
ssh -L 8000:127.0.0.1:8000 <user>@<vm-ssh-address>
This would allow me to browse to http://localhost:8000 and access the website.
You can also do this via PuTTY, but I won't explain that here, so just Google for a guide.
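For completeness, a concrete invocation under the usual VirtualBox NAT defaults (host port 2222 forwarded to guest port 22; the user name here is only an assumption):
ssh -p 2222 -L 8000:127.0.0.1:8000 user@127.0.0.1
After that, http://localhost:8000 on the host is tunnelled to the Django server inside the guest.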
The SSH tunnel is an OK work-around, but the problem is almost certainly CentOS 7, which now uses firewalld rather than iptables to manage access. And, unlike iptables, the default configuration is quite restrictive.
If
ps -ae | grep firewall
returns something like
602 ? 00:00:00 firewalld
your system is running firewalld, not iptables. They do not run together.
To correct your VM so you can access your Django site from the host, use the commands:
firewall-cmd --zone=public --add-port=8000/tcp --permanent
firewall-cmd --reload
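If you want to confirm the rule took effect (assuming the default public zone), the following should now list 8000/tcp:
firewall-cmd --zone=public --list-ports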
Many thanks to pablo v in the post "Access django server on virtual Machine" for pointing this out.

LIST natpf rules in Virtualbox/Vagrant

I often get errors like this when running Vagrant:
VBoxManage: error: A NAT rule of this name already exists
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component NATEngine, interface INATEngine, callee nsISupports
VBoxManage: error: Context: "AddRedirect(Bstr(strName).raw(), proto, Bstr(strHostIp).raw(), RTStrToUInt16(strHostPort), Bstr(strGuestIp).raw(), RTStrToUInt16(strGuestPort))" at line 1524 of file VBoxManageModifyVM.cpp
I'd like to remove all port forwarding rules before doing vagrant up, but I have trouble LISTING natpf rules. Is there any way to do it using vboxmanage or via some facilities in Vagrant?
Update: Vagrant version 1.3.4. I can replicate the problem as follows: start a VM installation the normal way (vagrant up) and force power off the VM during installation (this simulates e.g. a failed install). The natpf1 port forwarding rule is then left in the system. The only way to clean it up is something like vboxmanage modifyvm #{vmid} --natpf1 delete rule_name, but you have to know the rule name beforehand... Furthermore, the rule stays there after vagrant destroy, and it seems that my Ruby natpf1-clearing function in Vagrant.configure is not run, which means the stale rule still clashes with the "fresh" one that Vagrant attempts to create.
This got the job done for me:
VBoxManage showvminfo $VM_NAME --machinereadable | awk -F '[",]' '/^Forwarding/ { printf ("Rule %s host port %d forwards to guest port %d\n", $2, $5, $7); }'
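Building on that, here is a rough sketch (VM_NAME is a placeholder) that reads the rule names from the same output and deletes every natpf1 rule; note that modifyvm only works while the VM is powered off (use controlvm ... natpf1 delete for a running VM):
VM_NAME="my-vagrant-vm"
VBoxManage showvminfo "$VM_NAME" --machinereadable \
  | awk -F '[",]' '/^Forwarding/ { print $2 }' \
  | while read -r rule; do
      VBoxManage modifyvm "$VM_NAME" --natpf1 delete "$rule"
    done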
I came here with the same problem; with your hint about deleting the rule I found that you can use the VirtualBox GUI to find the rules and delete them.
Of course, this only works when you are working on a machine with a GUI desktop.
Open the VirtualBox manager
Open the settings for the box in question (right-click -> Settings, or the gear icon)
Select Network from the list on the left and open the Port Forwarding dialogue
From here you'll be able to directly remove the rules.
Screenshot of the Port Forwarding dialogue: http://i.stack.imgur.com/6fQQc.png
Looking at the rules, it seems they just get a name that is equal to the port being set. So you can also look at the Vagrantfile, and search for a line like this:
db.vm.network :forwarded_port, guest: 5432, host: 5432
And guess that the name of the rule will be 5432. The name of the rule forwarding the SSH port 22 is called ssh.
$ vboxmanage modifyvm "vbox-id" --natpf1 delete "5432"
You can list the NAT rules with the following command:
VBoxManage showvminfo #{vmid}
You then get a lot of information about your VM including the forwarding rules, for example:
NIC 1 Rule(1): name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
Expanding on Andrew's answer, this lists the rules for all your VMs:
for vm in `vboxmanage list vms | awk -F'"' '$0=$2'`
do
  echo "Rules for VM $vm"
  VBoxManage showvminfo $vm --machinereadable | awk -F '[",]' '/^Forwarding/ { printf ("Rule %s host port %-5d forwards to guest port %-5d\n", $2, $5, $7); }'
  printf '\n'
done
You can delete a rule from the command line by issuing:
VBoxManage controlvm "boot2docker-vm" natpf1 delete "tcp-port80"
The last parameter in quotes is the rule name you wish to delete.

Jetty JMX open port 1099

I use CentOS 6.5 and Jetty 9.1.0.v20131115. I use Jetty's JMX capabilities.
I want to have JMX accessible only from within the running computer (localhost, or 127.0.0.0/8), but not from outside (e.g. JMX shall not be accessible from public.example.com).
Therefore, I configured Jetty's JMX RMI host to use jetty.jmxrmihost=localhost instead of a wildcard jetty.jmxrmihost=0.0.0.0.
Yet still, my Jetty server instance is accessible from "outside", allowing anyone to connect to my Jetty server via JMX.
What do I have to configure to make Jetty listen to only those JMX connections which originate from localhost?
Here are my Jetty configuration files that are relevant to this topic:
file ${jetty.base}/start.d/jmx.ini:
--module=jmx
#jetty.jmxrmihost=localhost # I tried this one, but it didn't work either
jetty.jmxrmihost=127.0.0.1
jetty.jmxrmiport=1099
file ${jetty.base}/start.d/jmx-remote.ini:
--module=jmx-remote
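For reference, one way to check which address the RMI registry actually bound to on the Jetty host (a sketch; netstat vs. ss availability depends on the CentOS install):
sudo netstat -tlnp | grep 1099
# or, on newer installs
sudo ss -tlnp | grep 1099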
Just from the way the question is asked, it seems like it is less of a Jetty/JMX issue and more of a firewall issue - what you want is to block unwanted outside traffic to the JMX port on this server.
If you have permissions and are willing to do so, you will want to remove any rule from /etc/sysconfig/iptables that is opening the JMX port (in this example, 1099). Such a rule will look like the following:
[0:0] -A INPUT -s SOME_IP_SUBNET -p tcp -m tcp --dport 1099 -j ACCEPT
Or, on the flip side, you may want to enable JMX monitoring only for a specific subnet (such as for a company's subnet), at which point, you'd want to add the following:
[0:0] -A INPUT -s MY_IP_SUBNET_HERE -p tcp -m tcp --dport JMX_PORT -j ACCEPT
Replace MY_IP_SUBNET_HERE and JMX_PORT with your own IP subnet and JMX port, respectively.
I haven't written a lot of rules for iptables myself, so please consider the above as an example and not necessarily the exact syntax you need. *nixCraft provides a basic guide to handling iptables/sysctl, which also covers how to modify rules without editing the file (I usually just modify the file).
Two notes, if you go the route of modifying the iptables file:
Be sure to restart iptables (/etc/init.d/iptables restart or service iptables restart)
Call /sbin/sysctl -p after restarting iptables. Restarting iptables wipes out any custom rules from sysctl.conf; calling sysctl -p will restore those rules.
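As a quick sanity check after the change (the host name below is just the question's placeholder):
# on the Jetty server: only the rules you intend should mention 1099
sudo iptables -L -n | grep 1099
# from an outside machine: the connection should now be refused or time out
nc -vz -w 5 public.example.com 1099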