How to set up port forwarding in KVM natively? - centos7

I am using virsh to manage virtual machines with KVM. I have a main host and a virtual machine running an HTTP service (port 80). How can I set up port forwarding to expose the HTTP service to the internet without using firewall tools such as iptables, firewalld, or route?
I currently use iptables, and it is very hard to manage all the connection types.
Can I set up rules in KVM to redirect HTTP traffic between the main host and the virtual machine? For example, VirtualBox has a port-forwarding option in its network configuration where you specify the host IP, host port, guest IP, and guest port, and can make the host listen on all interfaces (0.0.0.0).
How can I do this in KVM?
I tried editing the domain XML to use QEMU's native port forwarding via qemu arguments, but it does not work:
# virsh edit demo
> <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
>   <name>...
>   ...
>   <qemu:commandline>
>     <qemu:arg value='-redir'/>
>     <qemu:arg value='tcp:80::80'/>
>   </qemu:commandline>
> </domain>
# virsh start demo
# ps -aux | grep qemu
root 30119 58.8 2.9 3421616 330084 ? Sl 15:38 0:07 qemu-system-x86_64 -enable-kvm -name demo -S -machine pc-i440fx-xenial,accel=kvm,usb=off -cpu Haswell -m 2048 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid c996f3b2-5e16-470f-9ad6-e591fc9a2537 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-demo/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x4.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x4 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x4.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x4.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x3 -drive file=/var/kvm/images/demo.img,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:43:ed:8d,bus=pci.0,addr=0x2 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-demo/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -redir tcp:80::80 -msg timestamp=on
Now, from the host, http://192.168.123.91/ works fine, but http://127.0.0.1/ does not connect, even though the qemu command line shows -redir tcp:80::80.
What did I do wrong?

What is your interface type? According to the qemu man page, for port forwarding to work the interface type has to be user, like:
<interface type="user">
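Following that suggestion, here is a minimal sketch of what the user-mode interface could look like in the domain's devices section (the MAC address and model are assumptions, not values confirmed to work with -redir; user-mode networking replaces the tap interface visible in the ps output above):

```xml
<interface type='user'>
  <mac address='52:54:00:43:ed:8d'/>
  <model type='virtio'/>
</interface>
```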

Related

How to let tomcat 8.5 run on port 80 on AWS Linux 2

I have been running Tomcat 7 on port 80 on AWS Linux. I run it as root and it has been working great, with no issues.
Now I have created a new AWS EC2 instance (t2.micro) using Amazon Linux 2 and installed Java 11 and Tomcat 8.5 on it. But I am not able to get Tomcat to bind to port 80. It runs fine on 8080 but not on 80. The error is:
Caused by: java.net.SocketException: Permission denied
Mar 18 08:30:54 ----.compute.internal server[3467]: at java.base/sun.nio.ch.Net.bind0(Native Method)
What I have already done:
I have updated Tomcat's server.xml to use port 80 (changed 8080 to 80)
Tried running the service as root: sudo tomcat start
Did "sudo su -" and then ran "tomcat start"
There was no TOMCAT_USER variable in tomcat.conf, so I added it
TOMCAT_USER="root"
Tried running: iptables -A INPUT -p tcp --dport 80 -j ACCEPT
Can someone please tell me what I need to do to fix it? I just need Tomcat to be my web server, listening on ports 80 and 443. I don't want httpd, nginx, or port forwarding.
Here is the output of /etc/system-release and /etc/os-release:
cat /etc/system-release
Amazon Linux release 2 (Karoo)
[root@ip-172-31-32-37 ec2-user]# cat /etc/os-release
NAME="Amazon Linux"
VERSION="2"
ID="amzn"
ID_LIKE="centos rhel fedora"
VERSION_ID="2"
PRETTY_NAME="Amazon Linux 2"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
HOME_URL="https://amazonlinux.com/"
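A likely cause (not confirmed in the question): on Linux, only processes running as root or holding the CAP_NET_BIND_SERVICE capability may bind ports below 1024, and the Tomcat systemd service typically drops privileges to a non-root user no matter how you start it, so "sudo tomcat start" does not help. A sketch of granting the capability to the JVM binary; the path resolution below is an assumption, adjust it for your install:

```shell
# Capabilities must be set on the real binary, not a symlink.
JAVA_BIN=$(readlink -f "$(command -v java)")
# Allow this binary to bind privileged ports (e.g. 80, 443):
sudo setcap 'cap_net_bind_service=+ep' "$JAVA_BIN"
# Verify the capability was applied:
sudo getcap "$JAVA_BIN"
```

One caveat: a binary with file capabilities runs in secure-execution mode, where the loader ignores variables like LD_LIBRARY_PATH, so some Tomcat setups need additional library-path configuration after this change.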

nmap reports closed port Centos 7 while a pid is running on this port

On a CentOS Linux 7 machine, I have a web app served on port 1314
$ netstat -anp | grep 1314
tcp 0 0 127.0.0.1:1314 0.0.0.0:* LISTEN 1464/hugo
tcp 0 0 127.0.0.1:60770 127.0.0.1:1314 TIME_WAIT -
and I can curl it locally.
I opened port 1314:
iptables-save | grep 1314
-A IN_public_allow -p tcp -m tcp --dport 1314 -m conntrack --ctstate NEW,UNTRACKED -j ACCEPT
I checked with nmap locally:
PORT STATE SERVICE
1314/tcp open pdps
Everything seems fine.
Now if I try to curl the web app from another machine I get connection refused.
When I try nmap from the remote machine:
PORT STATE SERVICE
1314/tcp closed pdps
So the firewall isn't blocking the port, but it looks like nothing is listening on port 1314...
But we know the web app is running on this endpoint, so what is going on?
Having a process listening on a port (with the port open and properly configured in the firewall) is not enough to enable remote communication: the address the process binds to must also be reachable from the remote machine.
Here, in the netstat output, we can see that the local address is localhost (127.0.0.1 or ::1). The loopback interface is only reachable from the machine itself, not from the remote machine I was using to curl my web app. This also explains why nmap reported the port as closed (meaning nothing was listening on the address it probed).
Note: to listen on all network interfaces, the local address should be 0.0.0.0 (IPv4) or ::: (netstat's notation for the IPv6 any-address).
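The difference is easy to demonstrate with a few lines of Python (a standalone sketch, not the questioner's app; port 0 asks the OS for any free port):

```python
import socket

# Loopback only: reachable just from this machine, like the hugo
# process in the netstat output above.
lo = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lo.bind(("127.0.0.1", 0))
lo.listen()

# All IPv4 interfaces: reachable from remote hosts too,
# firewall permitting.
anyaddr = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
anyaddr.bind(("0.0.0.0", 0))
anyaddr.listen()

print(lo.getsockname()[0])       # 127.0.0.1
print(anyaddr.getsockname()[0])  # 0.0.0.0

lo.close()
anyaddr.close()
```

A remote curl only succeeds against the second kind of listener; the first one answers solely on 127.0.0.1.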

Nmap can't find a listening port

I created an AWS instance today, and I am running a server listening on port 19999. Let's see what I've got:
root@ip-172-31-18-145:/home/ubuntu# sudo lsof -i -P -n | grep 19999
ssserver 20387 root 4u IPv4 65547 0t0 TCP *:19999 (LISTEN)
ssserver 20387 root 5u IPv4 65548 0t0 UDP *:19999
But I couldn't connect to the port from my remote client side, so I tried nmap. Here is what I got:
root@ip-172-31-18-145:/home/ubuntu# nmap -Pn 127.0.0.1
Starting Nmap 7.60 ( https://nmap.org ) at 2020-02-15 13:47 UTC
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0000030s latency).
Not shown: 999 closed ports
PORT STATE SERVICE
22/tcp open ssh
My question is: what's wrong with nmap? To make sure the port is listening, I ran nc to try to listen on 19999 again, and here is the output:
ubuntu#ip-172-31-18-145:~$ nc -l 19999
nc: Address already in use
Nothing is wrong with nmap; by default it only scans the 1000 most common ports. You can use nmap -Pn 127.0.0.1 -p 19999 to check a specific port.
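For a one-off check of a single port you don't even need nmap; a plain TCP connect will do. A minimal Python sketch (the helper name is mine, not from the question):

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: start a listener on an ephemeral loopback port and probe it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen()
port = srv.getsockname()[1]

print(port_is_open("127.0.0.1", port))  # True: something is listening
srv.close()
```

Note this only tells you the port answers from where you run it; as in the previous question, a service bound to 127.0.0.1 will look open locally and closed remotely.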

Host key verification failed in google compute engine based mpich cluster

TLDR:
I have 2 Google Compute Engine instances and have installed MPICH on both.
When I try to run a sample, I get Host key verification failed.
Detailed version:
I followed this tutorial to get this task done: http://mpitutorial.com/tutorials/running-an-mpi-cluster-within-a-lan/.
I have 2 Google Compute Engine VMs with Ubuntu 14.04 (the Google Cloud account is a trial one, btw). I downloaded this version of MPICH on both instances: http://www.mpich.org/static/downloads/3.3rc1/mpich-3.3rc1.tar.gz and installed it using these steps:
./configure --disable-fortran
sudo make
sudo make install
This is the way the /etc/hosts file looks on the master-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.3 client
10.128.0.2 master
10.128.0.2 linux1.us-central1-c.c.ultimate-triode-161918.internal linux1  # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
And this is the way the /etc/hosts file looks on the client-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.2 master
10.128.0.3 client
10.128.0.3 linux2.us-central1-c.c.ultimate-triode-161918.internal linux2  # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
The remaining steps involved adding a user named mpiuser on both nodes, configuring passwordless SSH authentication between the nodes, and configuring a shared cloud directory between the nodes.
The configuration worked up to this point. I downloaded this file https://raw.githubusercontent.com/pmodels/mpich/master/examples/cpi.c to /home/mpiuser/cloud/mpi_sample.c and compiled it this way:
mpicc -o mpi_sample mpi_sample.c
and issued this command on the master node while logged in as the mpiuser:
mpirun -np 2 -hosts client,master ./mpi_sample
and I got this error:
Host key verification failed.
What's wrong? I've been troubleshooting this problem for more than 2 days, but I can't find a valid solution.
It turned out that my passwordless SSH wasn't configured properly. I created 2 new instances and did the following to get working passwordless SSH, and thus a working version of that sample. The following steps were executed on Ubuntu Server 18.04.
First, by default, instances on Google Cloud have the PasswordAuthentication setting turned off. On the client server, do:
sudo vim /etc/ssh/sshd_config
and change PasswordAuthentication no to PasswordAuthentication yes. Then
sudo systemctl restart ssh
Generate an SSH key on the master server with:
ssh-keygen -t rsa -b 4096 -C "user.mail@server.com"
Copy the generated SSH key from the master server to the client:
ssh-copy-id client
Now you have fully functional passwordless SSH from master to client. However, mpich still failed.
The additional step I took was to append the public key to the ~/.ssh/authorized_keys file, on both master and client. So execute this command on both servers:
sudo cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Then make sure the /etc/ssh/sshd_config files on both the client and the master have the following settings:
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
Restart the SSH service on both client and master:
sudo systemctl restart ssh
And that's it; mpich works smoothly now.
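A note on the error message itself: "Host key verification failed" typically means SSH hit the interactive are-you-sure host key prompt, which a non-interactive mpirun cannot answer. Pre-accepting the peers' host keys avoids that prompt; a sketch for the mpiuser's ~/.ssh/config (host names taken from the /etc/hosts files above; accept-new requires OpenSSH 7.6+, and whether relaxing host key checking fits your security policy is your call):

```
Host master client
    StrictHostKeyChecking accept-new
```

Alternatively, manually ssh once from the master to each host name used in the mpirun -hosts list, answering yes, so the keys land in known_hosts.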

Running APE Server in a Virtual Machine

I have been trying to set-up an Ajax Push Engine (APE) Server in a virtual machine, and have run into a bit of a snag. The problem is that the APE server cannot be accessed outside of the virtual machine.
Setup:
Guest OS: Ubuntu 10.10 (I believe) with the ape package installed
IP Address: 192.168.56.1 using a host-only network adapter
APE Server running on port 6969
If I try wget 127.0.0.1:6969 in the virtual machine, I get a response.
If I try wget 192.168.56.1:6969 from the host OS, I get a Connection Refused message.
If I ping 192.168.56.1, I also get a response.
Any help would be greatly appreciated!
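Connection refused from the host usually means nothing is listening on the address and port being hit. A quick diagnostic to run inside the guest (a sketch; 6969 is the APE port from the question):

```shell
# Which local address is the APE daemon bound to?
# 127.0.0.1:6969 means loopback only (unreachable from the host OS);
# 0.0.0.0:6969 means all interfaces.
netstat -tlnp | grep 6969
```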
I ended up redoing everything from scratch, and it worked, so I must have gotten it right somehow. For the benefit of others...
To get APE Server running in a virtual machine (in particular, using VirtualBox), you need to do the following:
Setting up the environment
Download and install VirtualBox
Open VirtualBox, and go to File > Preferences, then Network
Confirm that there exists a host-only network vboxnet0 (if not, create it). Take note of its IPv4 address (192.168.56.1, in my case)
Create a new Ubuntu Virtual Machine
Start the Virtual Machine
Getting the Libraries
Add the PPA for libmysqlclient15off, a prerequisite for APE Server:
username# gpg --keyserver hkp://keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
username# gpg -a --export CD2EFD2A | apt-key add -
sudo sh -c 'echo "deb http://repo.percona.com/apt maverick main" >> /etc/apt/sources.list.d/percona.list'
Install libmysqlclient15off
sudo apt-get update; sudo apt-get install libmysqlclient15off
Get and install the latest version of APE server
Edit /etc/network/interfaces, and add the following to the end:
auto eth0
iface eth0 inet static
address 192.168.56.101
netmask 255.255.255.0
Close the virtual machine and go into its settings. Change the network settings for the first interface to Host-only Adapter
Setting Up APE
Restart the Virtual Machine, and ensure that the APE daemon is running
username# ps -ef | grep "aped"
If you need to, make changes to /etc/ape/ape.conf
Final Steps
Add the following to your hosts file, or some variation:
192.168.56.101 local.site.com
192.168.56.101 0.local.site.com
192.168.56.101 1.local.site.com
192.168.56.101 2.local.site.com
192.168.56.101 3.local.site.com
192.168.56.101 4.local.site.com
192.168.56.101 5.local.site.com
192.168.56.101 6.local.site.com
192.168.56.101 7.local.site.com
192.168.56.101 8.local.site.com
192.168.56.101 9.local.site.com
Access your new APE server via local.site.com:6969
Check the APE config file. Are you binding to the right IP? By default it's 127.0.0.1.