List natpf rules in VirtualBox/Vagrant

I often get errors like this when running Vagrant:
VBoxManage: error: A NAT rule of this name already exists
VBoxManage: error: Details: code NS_ERROR_INVALID_ARG (0x80070057), component NATEngine, interface INATEngine, callee nsISupports
VBoxManage: error: Context: "AddRedirect(Bstr(strName).raw(), proto, Bstr(strHostIp).raw(), RTStrToUInt16(strHostPort), Bstr(strGuestIp).raw(), RTStrToUInt16(strGuestPort))" at line 1524 of file VBoxManageModifyVM.cpp
I'd like to remove all port forwarding rules before doing vagrant up, but I have trouble LISTING natpf rules. Is there any way to do it using vboxmanage or via some facilities in Vagrant?
Update: Vagrant version 1.3.4. I can replicate the problem as follows: start the VM installation the normal way (vagrant up) and force power off the VM during installation (this simulates e.g. a failed install). The natpf1 port forwarding rule is then left in the system. The only way to clean it up is something like vboxmanage modifyvm #{vmid} --natpf1 delete rule_name, but you have to know the rule name beforehand... Furthermore, the rule stays there after vagrant destroy, and it seems that my Ruby natpf1-clearing function in Vagrant.configure is not run, which means the stale rule still clashes with the "fresh" one that Vagrant attempts to create.

This got the job done for me:
VBoxManage showvminfo $VM_NAME --machinereadable | awk -F '[",]' '/^Forwarding/ { printf ("Rule %s host port %d forwards to guest port %d\n", $2, $5, $7); }'
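For reference, the --machinereadable output contains lines like Forwarding(0)="ssh,tcp,127.0.0.1,2222,,22", so for a box with Vagrant's default SSH forwarding the one-liner prints something along the lines of:
Rule ssh host port 2222 forwards to guest port 22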

I came here with the same problem; with your hint about deleting the rule I found that you can use the VirtualBox GUI to find the rules and delete them.
Of course, this only works when you are working on a machine with a GUI desktop.
Open the VirtualBox manager
Open the settings for the box in question (right-click -> Settings, or the gear icon)
Select Network from the list on the left and open the Port Forwarding dialogue
From here you'll be able to directly remove the rules.
Screenshot of the Port Forwarding dialogue: http://i.stack.imgur.com/6fQQc.png
Looking at the rules, it seems they just get a name that is equal to the port being set. So you can also look at the Vagrantfile, and search for a line like this:
db.vm.network :forwarded_port, guest: 5432, host: 5432
And guess that the name of the rule will be 5432. (The rule that forwards the SSH port 22 is simply called ssh.)
$ vboxmanage modifyvm "vbox-id" --natpf1 delete "5432"
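If you would rather not guess, Vagrant lets you name the forwarded port explicitly with an id option (as far as I recall, the VirtualBox provider uses that id as the natpf rule name), e.g.:
db.vm.network :forwarded_port, guest: 5432, host: 5432, id: "postgres"
after which the stale rule can be removed with vboxmanage modifyvm "vbox-id" --natpf1 delete "postgres".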

You can list the NAT rules with the following command:
VBoxManage showvminfo #{vmid}
You then get a lot of information about your VM including the forwarding rules, for example:
NIC 1 Rule(1): name = ssh, protocol = tcp, host ip = 127.0.0.1, host port = 2022, guest ip = , guest port = 22
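If you only want the forwarding rules and not the rest of the VM information, a simple filter over that output does the trick (illustrative; the exact label may vary slightly between VirtualBox versions):
VBoxManage showvminfo #{vmid} | grep "Rule"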

Expanding on Andrew's answer, this lists the rules for all your VMs
for vm in `vboxmanage list vms | awk -F'"' '$0=$2'`
do
  echo "Rules for VM $vm"
  VBoxManage showvminfo $vm --machinereadable | awk -F '[",]' '/^Forwarding/ { printf ("Rule %s host port %-5d forwards to guest port %-5d\n", $2, $5, $7); }'
  printf '\n'
done

You can delete a rule from the command line by issuing:
VBoxManage controlvm "boot2docker-vm" natpf1 delete "tcp-port80"
The last parameter in quotes is the rule name you wish to delete.
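Putting the earlier listing one-liner together with the delete, a rough cleanup sketch for a powered-off VM could look like this (illustrative only; modifyvm works while the VM is not running, whereas the controlvm variant above is for a running VM, and the loop assumes rule names without spaces):
# delete all NAT port-forwarding rules on NIC 1 of $VM_NAME
for rule in $(VBoxManage showvminfo "$VM_NAME" --machinereadable | awk -F '[",]' '/^Forwarding/ { print $2 }')
do
  echo "Deleting rule $rule"
  VBoxManage modifyvm "$VM_NAME" --natpf1 delete "$rule"
done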

Related

Unable to register AWS host to Ambari server

While registering a host to the cluster of Ambari-server, I am getting the following error.
"Host checks were skipped on 1 hosts that failed to register."
I'm trying to install HDP 2.5 version on the instance of AWS.
I have tried to follow the documentation of Hortonworks.
https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.0.3/bk_ambari-installation/content/set_the_hostname.html
I have added the public IP address and public hostname to the /etc/hosts file and changed the hostname in the /etc/hostname file on the server and on the host. I rebooted both and the hostnames were changed. Then I stopped iptables with:
sudo service iptables stop
After doing everything, the host registration is still failing. Kindly help. I am stuck.
Background
From my experience with Ambari (Hortonworks), you have to explicitly set up your Hadoop nodes in each other's /etc/hosts files with the actual names/IPs that the Hadoop services will bind to. NOTE: the hostnames should also be FQDNs (fully qualified domain names).
For example if you're setting up the hosts as:
node01.mydom.com (10.0.0.2)
node02.mydom.com (10.0.0.3)
node03.mydom.com (10.0.0.4)
These entries should be in all three servers' /etc/hosts files, and these should be the names used when referencing them within Ambari's installation/setup wizards.
If you do not pay special attention to this detail, the Ambari server will fail to find/manage any of the other nodes that you're telling it to manage.
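For the example above, the /etc/hosts on each of the three machines would contain entries along these lines (the IPs and names are of course just illustrative):
10.0.0.2   node01.mydom.com   node01
10.0.0.3   node02.mydom.com   node02
10.0.0.4   node03.mydom.com   node03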
hostname of ambari-agents
The other thing to check is the ambari-agents and which hostnames they think they're running as:
$ ps -eaf|grep ambari_agent
root 3282 1 0 Jul30 ? 00:00:00 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/AmbariAgent.py start --expected-hostname=node01.mydom.com
root 3290 3282 1 Jul30 ? 08:24:29 /usr/bin/python /usr/lib/python2.6/site-packages/ambari_agent/main.py start --expected-hostname=node01.mydom.com
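A quick sanity check is to compare that --expected-hostname value with what each node reports for itself; they should match exactly, FQDN included:
$ hostname -f
node01.mydom.com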
Debugging further
In the screen where you're attempting to register the other nodes as agents, there's a full log of what's happening and you can typically get the commands from this area and attempt to run them directly. I've done this on a number of occasions. The commands will often be python ... commands which you can then copy/paste from the logs and run on the Ambari server where you're attempting to run the install.

Can't connect to VM running Django

Using VirtualBox, I have a NAT enabled VM running Centos 7. The host OS is Windows 7. I can't seem to access the Django web server running inside the VM. What am I missing?
I have two port forwarding rules set for the virtual machine (one for SSH and one for port 8000).
I start the Django web server on the guest OS with:
python manage.py runserver 0.0.0.0:8000
And I try to visit the webpage on the host OS at:
http://localhost:8000
Google Chrome gives me the error code ERR_CONNECTION_RESET.
The result of curl on the host OS:
[user@win7 ~ ]$ curl http://localhost:8000
curl: (56) Recv failure: Connection reset by peer
Here is the result of a netstat performed on the guest OS:
[user@vm ~ ]$ netstat -na | grep 8000
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN
Here is the result of a netstat performed on the host OS (with Cygwin):
[user@win7 ~ ]$ netstat -na | grep 8000
TCP 0.0.0.0:8000 0.0.0.0:0 LISTENING
It is also worth mentioning that the SSH rule works. I can SSH into the machine with no problems.
This is not a solution, but a work-around for my problem. Maybe this will help anyone encountering a problem similar to mine, and just wants to be able to connect to their VM's webserver.
Since SSH was working, I figured I could access the webpage via a SSH Tunnel. The syntax for doing so via command line is:
ssh -L <local-port>:<remote-host>:<remote-port> <user>@<ssh-server>
So in my situation, if I wanted to open a tunnel via command line I would do:
ssh -L 8000:127.0.0.1:8000 <user>@<vm-ssh-address>
This would allow me to browse to http://localhost:8000 and access the website.
You can also do this via PuTTY, but I won't explain that here, so just Google for a guide.
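For completeness, with a stock Vagrant box (where SSH is typically forwarded from host port 2222 to guest port 22; adjust the port and user to whatever vagrant ssh-config reports), the full tunnel command would look something like:
ssh -p 2222 -L 8000:127.0.0.1:8000 vagrant@127.0.0.1
After that, http://localhost:8000 on the host goes through the tunnel to the guest's port 8000.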
The ssh tunnel is an OK work-around, but the problem is almost certainly CentOS 7, which now uses firewalld rather than iptables to manage access. And, unlike iptables, the default configuration is quite restrictive.
If
ps -ae | grep firewall
returns something like
602 ? 00:00:00 firewalld
your system is running firewalld, not iptables. They do not run together.
To fix your VM so you can access your Django site from the host, use the commands:
firewall-cmd --zone=public --add-port=8000/tcp --permanent
firewall-cmd --reload
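You can then confirm that the port is open (again assuming the public zone) with:
firewall-cmd --zone=public --list-ports
which should list 8000/tcp among the open ports.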
Many thanks to pablo v in the post "Access django server on virtual Machine" for pointing this out.

Getting access to host VM in a VirtualBox with Puppet

I have an application running in a VirtualBox VM that for testing purposes needs to connect to the host machine. The VM is started with Vagrant and managed by Puppet.
What is the best way to set up this connection? For example, on my host machine the app runs on port 9200. So from my VM I'd like to go to myhostmachine:9200.
Currently I'm thinking of hacking in a small command that adds hostvm as an entry to /etc/hosts, using a simple command like this to figure out my host IP (which is the same as the default route):
/sbin/ip -4 route list 0/0 | grep -m 1 default | awk '/default/ { print $3 }'
And just let Puppet run that every time using the exec functionality. However, I get the feeling there has to be a better way.
The guest OS is Ubuntu 12.04 and the Host is OS-X.
Thanks!
As far as I know, at the moment Vagrant always sets up a NAT'd interface to connect to VirtualBox, so I think the IP of your host machine will always be the 10.0.2.2 address you mentioned. I reckon a Puppet host declaration might be easier to manage than running that command each time:
host { 'myhostmachine':
  ip => '10.0.2.2',
}
The puppet resource reference for hosts has all the other params you can set too.
I was then able to access the host using myhostmachine:9200
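Once Puppet has applied that host entry, you can verify it from inside the guest, for example with (assuming the service on 9200 speaks HTTP):
getent hosts myhostmachine
curl http://myhostmachine:9200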

Jetty JMX open port 1099

I use CentOS 6.5 and Jetty 9.1.0.v20131115. I use Jetty's JMX capabilities.
I want to have JMX accessible only from within the running computer (localhost, or 127.0.0.0/8), but not from outside (e.g. JMX shall not be accessible from public.example.com).
Therefore, I configured Jetty's JMX RMI host to use jetty.jmxrmihost=localhost instead of a wildcard jetty.jmxrmihost=0.0.0.0.
Yet still, my Jetty server instance is accessible from "outside", allowing anyone to connect to my Jetty server via JMX.
What do I have to configure to make Jetty listen to only those JMX connections which originate from localhost?
Here are my Jetty configuration files that are relevant to this topic:
file ${jetty.base}/start.d/jmx.ini:
--module=jmx
#jetty.jmxrmihost=localhost # I tried this one, but it didn't work either
jetty.jmxrmihost=127.0.0.1
jetty.jmxrmiport=1099
file ${jetty.base}/start.d/jmx-remote.ini:
--module=jmx-remote
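For what it's worth, you can see which address the RMI registry port actually ended up bound to with something like (ss -tlnp works too on newer systems):
netstat -tlnp | grep 1099
If this shows 0.0.0.0:1099 or :::1099 rather than 127.0.0.1:1099, the registry is still listening on all interfaces.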
Just from the way the question is asked, it seems like it is less of a Jetty/JMX issue and more of a firewall issue - what you want is to block unwanted outside traffic to the JMX port on this server.
If you have permissions and are willing to do so, you will want to remove any rule from /etc/sysconfig/iptables that is opening the JMX port (in this example, 1099). Such a rule will look like the following:
[0:0] -A INPUT -s SOME_IP_SUBNET -p tcp -m tcp --dport 1099 -j ACCEPT
Or, on the flip side, you may want to enable JMX monitoring only for a specific subnet (such as for a company's subnet), at which point, you'd want to add the following:
[0:0] -A INPUT -s MY_IP_SUBNET_HERE -p tcp -m tcp --dport JMX_PORT -j ACCEPT
replacing MY_IP_SUBNET_HERE and JMX_PORT with your own IP subnet and JMX port, respectively.
I haven't written a lot of rules for iptables myself, so please consider the above as an example and not necessarily the exact syntax you need. *nixCraft provides a basic guide to handling iptables/sysctl, which also covers how to modify rules without editing the file (I usually just modify the file).
Two notes, if you go the route of modifying the iptables file:
Be sure to restart iptables (/etc/init.d/iptables restart or service iptables restart)
Call /sbin/sysctl -p after restarting iptables. Restarting iptables wipes out any custom rules from sysctl.conf; calling sysctl -p will restore those rules.
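As a final sanity check after the restart, you can list which rules currently mention the JMX port:
iptables -L INPUT -n --line-numbers | grep 1099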

Vagrant/Puppet Connection Timeout (Obvious Fixes Attempted, Working Previously)

For some reason my Vagrant/Puppet instance stopped working out of the blue--I am no longer able to reach the VM from my host machine, despite no configuration or network changes.
Interestingly, the private network must be recognized, as the browser is attempting to connect; however, the request seems to be timing out when issued from OS X... Also worth noting, I have not installed any system updates at this time. The VM was working previously on 10.9.
Steps I have tried to resolve the issue:
vagrant destroy && vagrant up
Result: Vagrant loads properly, SSH works, and Apache is running, with the proper result returned from ping 127.0.0.1
vagrant reload
Result: Same as above; VM reloads successfully, no change in network accessibility
sudo killall -HUP mDNSResponder
Result: No change in accessibility via the bound IP (10.0.0.100)
Port forwarding (explicit) vs "private_network" in the Vagrantfile
Result: No change in accessibility via the bound IP (10.0.2.15)
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Result: No change in accessibility via the bound IP, connection still times out
Vagrant File: http://pastebin.com/Hk8drWxF
Puppet File: http://pastebin.com/20Sp1m22
Any thoughts? Thanks!
Could this be an issue with the netmask? You specify two IPs there: 10.0.0.100 and 10.0.2.15. If you're using the default (class C) subnet mask, they would end up on different subnets and be unable to speak to each other directly.
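To illustrate: with the default class C (/24) mask, 10.0.0.100 sits in 10.0.0.0/24 while 10.0.2.15 sits in 10.0.2.0/24. You can check which masks the guest interfaces actually got from inside the guest with:
ip addr show
and compare the /NN prefixes on eth0 (the NAT interface) and eth1 (the private_network interface). If memory serves, Vagrant's private_network also accepts an explicit netmask option in the Vagrantfile should you need something other than the default.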