I have to create a network controller on CentOS 7

I have made a setup where a controller running CentOS 7 sits in the middle. eth0 of the middle controller is connected to the Internet; eth1 is connected to a laptop/router (the LAN). I have to forward traffic from eth0 to eth1, and I have to control the eth1 traffic from the controller.
Problem: I am unable to ping or send traffic from eth0 to eth1. Internet to eth0 works fine; controller to eth1 is not working.
Please help!
Thanks

As you probably do not have a DHCP server running on your CentOS machine, you should set a static IP on both machines. On CentOS you can do this using
ifconfig eth1 192.168.178.1
Then on the other end of eth1 do
ifconfig eth0 192.168.178.2
You may also have to enable IP forwarding on CentOS (note the redirect has to happen as root, so a plain sudo echo won't work):
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
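Note that both the ifconfig addresses and the /proc setting are lost on reboot. A minimal sketch of making the forwarding persistent and NATing the LAN out through eth0 (the MASQUERADE rule is an assumption based on eth0 being the Internet side, as described above):
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
# NAT traffic from the LAN (eth1) out through the Internet side (eth0)
sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT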

Related

OpenVPN client to SSH to EC2 private instance

I'm running the community OpenVPN server on a CIS Level 1 RHEL 7 instance, to which I can connect from my laptop without any issue. While connected, I can SSH to the OpenVPN server instance using its private IP, but not to anything else at all, not even a different instance in the same subnet. Say my VPN server is in the 10.100.0.0/28 subnet, the VPN client subnet is 192.168.10.0/24, and I want to SSH to an instance in 10.100.0.16/28. This is the relevant part of the server config:
push "redirect-gateway def1 bypass-dhcp"
push "route 10.100.0.16 255.255.255.240"
push "route 10.100.0.32 255.255.255.240"
;push "route 10.100.0.0 255.255.240.0"
route 10.100.0.16 255.255.255.240
route 10.100.0.32 255.255.255.240
;route 10.100.0.0 255.255.240.0
server 192.168.10.0 255.255.255.0
I have added these iptables rules to allow the VPN traffic:
## allow udp 1194
iptables -A INPUT -i eth0 -p udp -m udp --dport 1194 -m state --state NEW -j ACCEPT
## Allow TUN interface
iptables -A INPUT -i tun+ -j ACCEPT
## Allow TUN connections to be forwarded
iptables -A FORWARD -i tun+ -j ACCEPT
iptables -A FORWARD -i tun+ -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o tun+ -m state --state RELATED,ESTABLISHED -j ACCEPT
## NAT the VPN client traffic to the Internet
iptables -t nat -A POSTROUTING -s 192.168.10.0/24 -o eth0 -j MASQUERADE
## default TUN OUTPUT
iptables -A OUTPUT -o tun+ -j ACCEPT
Apart from that, I also did the following (a quick sanity check follows this list):
added net.ipv4.ip_forward = 1 to /etc/sysctl.conf
disabled the source/destination check on the VPN instance
added a static route to the VPC route table with destination 192.168.10.0/24, targeting the ENI attached to the VPN instance
added an ingress rule in the target instances' SG to allow the vpn-client subnet on port 22
There is no NACL involved yet (but I have to enable that at some point).
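A quick way to sanity-check those pieces from the VPN server (standard commands; the chain names refer to the iptables rules above):
sysctl net.ipv4.ip_forward               # should print 1
iptables -L FORWARD -v -n                # watch the packet counters while a client tests SSH
iptables -t nat -L POSTROUTING -v -n     # confirm the MASQUERADE rule is matching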
What else did I not do, or do wrong? I'm really stuck and I know I'm missing something really silly. Could anyone shed some light or point me in the right direction, please?
-S
Figured out why it was not working. These two lines:
route 10.100.0.16 255.255.255.240
route 10.100.0.32 255.255.255.240
in the config file were causing the issue. Without them, it forwards the traffic downstream without any issue. I'm still a bit confused by the OpenVPN documentation on route ... versus push "route ...", so I'm not really sure why those two lines were causing the connection issue. If anyone can shed some light on that, it will be much appreciated.
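For what it's worth, a plausible reading of the OpenVPN man page semantics (an interpretation, not verified against this exact setup): route adds an entry to the server host's own kernel routing table, steering that subnet into the tun device, whereas push "route ..." only tells connecting clients to add the route on their side. Since 10.100.0.16/28 is reachable from the server over its VPC network anyway, a server-side route for it would send traffic for that subnet back into the tunnel, where nothing answers:
# keep: clients learn to reach the VPC subnets through the VPN
push "route 10.100.0.16 255.255.255.240"
push "route 10.100.0.32 255.255.255.240"
# drop: these would route the server's own traffic for those subnets into tun0
;route 10.100.0.16 255.255.255.240
;route 10.100.0.32 255.255.255.240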

What is the simplest way to get vagrant/virtualbox to access host services?

I've been reading through many examples (both here and through various blogs and virtualbox/vagrant documentation) and at this point I think I should be able to do this.
What I ultimately would like to do is communicate with my docker daemon on my host machine and all the subsequent services I spin up arbitrarily.
To try to get this to work, I run the simple nginx container on my host and confirm it works:
$ docker run --name some-nginx -d -p 8080:80 docker.io/library/nginx:1.17.9
$ curl localhost:8080
> Welcome to nginx!
In my Vagrantfile I've defined my host-only network:
config.vm.network "private_network", ip: "192.168.50.4",
virtualbox__intnet: true
Now in my guest vagrant box, I expect that I should be able to access this same port:
$ curl localhost:8080
> curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl 127.0.0.1:8080
> curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl 192.168.50.4:8080 # I hope not, but maybe this will work?
> curl: (7) Failed to connect to 192.168.50.4 port 8080: Connection refused
If you're "inside" the Vagrant guest machine, localhost will be the local loopback adapter of THAT machine and not of your host.
In VirtualBox virtualization, which you are using, you can always connect to services running on your host's localhost via the 10.0.2.2 address. See: https://www.virtualbox.org/manual/ch06.html#network_nat
So in your case, with the web server running on port 8080 on your host, using
curl 10.0.2.2:8080
would mean success!
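A quick way to confirm this from inside the guest (10.0.2.2 is VirtualBox's default for the first NAT adapter; adjust if you have changed the NAT network):
ip route | grep default    # the NAT default route should point at 10.0.2.2
curl -v http://10.0.2.2:8080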
Run vagrant up to start the VM with NAT as the network interface, which means the guest VM can reach the same networks as the host.
vagrant ssh into the VM and install net-tools if your machine doesn't have the netstat tool.
Use netstat -rn to find the routable gateways. Below, 10.0.2.2 and 192.168.3.1 are the gateways present in the guest VM.
[vagrant@localhost ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 192.168.3.1 0.0.0.0 UG 0 0 0 eth2
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
192.168.33.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
Go to the host and run ifconfig. Find the 192.168.3.1 gateway shared with the host; the host has an IP of 192.168.3.x. Make sure the service on the host can be reached at 192.168.3.x.
Try it on the host: curl -v http://<192.168.3.x>:<same port on host>. If it can be reached, good.
Now go to the guest VM and try curl -v http://<192.168.3.x>:<same port on host>. If it can be reached there too, you can now access services on the host from the guest VM.
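Put together, the check looks like this (192.168.3.x stands in for the host's LAN address found above, and 8080 for whatever port the service listens on):
# on the host: find the LAN address in the 192.168.3.0/24 network
ifconfig | grep 192.168.3
curl -v http://192.168.3.x:8080    # the service must answer on the LAN address, not just localhost
# then from inside the guest:
curl -v http://192.168.3.x:8080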

Google Cloud direct default port to GlassFish port

A GlassFish application hosted on a Google Cloud VM instance is running on port 8080. I need to direct traffic from the default port 80 to port 8080. What is the best way to achieve that?
I tried to set port 80 as the GlassFish port, but that failed, as on Ubuntu a non-root process can't listen on a port lower than 1024.
You can use iptables on Linux to redirect traffic received on one port to a different port.
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
sudo /etc/init.d/iptables save
Double-check the documentation as you do not mention the version of Linux that you are running.
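Note that the save command above comes from the iptables init script on RHEL/CentOS-style systems; since the question mentions Ubuntu, a common equivalent there (an assumption about your setup) is the iptables-persistent package:
sudo apt-get install iptables-persistent    # offers to save the current rules on install
sudo netfilter-persistent save              # re-save after later changes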
Create an instance group for your VM, then create a load balancer that directs external port 80 traffic to port 8080 on your VM.

no communication - ec2 instances with two interfaces in different subnets

I am stuck with a seemingly simple configuration on AWS: spin up VMs with two interfaces each, where each interface is in a different subnet, and I can't communicate over the secondary interfaces. The important piece: from inside a VM I can reach all of its interfaces; between VMs in the public/private zones, communication works only over eth0.
Overview:
VPC 10.20.0.0/16
public zone:
management interface in subnet 10.20.0.0/20
production interface in subnet 10.20.48.0/20
private zone:
management interface in subnet 10.20.16.0/20
production interface in subnet 10.20.64.0/20
Network ACLs are open/default, and all interfaces have a security group that allows ping from 0.0.0.0/0.
When I spin up VMs with RHEL 7.5, I use this EC2 user-data script to bring up the secondary interface:
cat <<EOF > /etc/sysconfig/network-scripts/ifcfg-eth1
BOOTPROTO=dhcp
DEVICE=eth1
ONBOOT=yes
TYPE=Ethernet
USERCTL=no
EOF
ifup eth1
Ping over the eth0 works without any issues, ping over eth1 hangs.
Here is the routing on the VM in the public zone:
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.8.62 netmask 255.255.240.0 broadcast 10.20.15.255
[ec2-user@ip-10-20-8-62 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.53.116 netmask 255.255.240.0 broadcast 10.20.63.255
[ec2-user@ip-10-20-8-62 ~]$ ip route
default via 10.20.0.1 dev eth0 proto dhcp metric 100
default via 10.20.48.1 dev eth1 proto dhcp metric 101
10.20.0.0/20 dev eth0 proto kernel scope link src 10.20.8.62 metric 100
10.20.48.0/20 dev eth1 proto kernel scope link src 10.20.53.116 metric 101
[ec2-user@ip-10-20-8-62 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
And the same for the VM in the private zone:
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.19.55 netmask 255.255.240.0 broadcast 10.20.31.255
[ec2-user@ip-10-20-19-55 ~]$ ifconfig eth1
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 9001
inet 10.20.68.48 netmask 255.255.240.0 broadcast 10.20.79.255
[ec2-user@ip-10-20-19-55 ~]$ ip route
default via 10.20.16.1 dev eth0 proto dhcp metric 100
default via 10.20.64.1 dev eth1 proto dhcp metric 101
10.20.16.0/20 dev eth0 proto kernel scope link src 10.20.19.55 metric 100
10.20.64.0/20 dev eth1 proto kernel scope link src 10.20.68.48 metric 101
[ec2-user@ip-10-20-19-55 ~]$ ip rule
0: from all lookup local
32766: from all lookup main
32767: from all lookup default
Please let me know if I can provide any additional info; I have spent too much time already trying to make this work. The reason for such a setup is internal policy at our company. I will also need to make it work with three interfaces later on, so I am trying to understand what I am doing wrong here.
As I've seen in the AWS documentation, you need to add a different route table for your secondary network interface, because traffic from the secondary interface otherwise leaves with the MAC address of the primary interface, and this is not allowed:
Both the primary and the secondary network interfaces are in different subnets, and by default there is only one routing table. Only one of the network interfaces is used to manage non-local subnet traffic. Any non-local subnet traffic that comes into the network interface that isn't configured with the default gateway tries to leave the instance using the interface that has the default gateway. This isn't allowed, because the secondary IP address doesn't belong to the Media Access Control (MAC) address of the primary network interface.
Please follow this guide to solve the issue. I've tested it on CentOS 7 and it works.
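The core of the fix is source-based policy routing, so that replies from eth1's address leave via eth1. A minimal sketch using the addresses of the public-zone VM above (the table number 1000 is arbitrary; on RHEL these commands are typically made persistent via route-eth1/rule-eth1 files):
# dedicated table for eth1: its own subnet plus a default via eth1's gateway
ip route add 10.20.48.0/20 dev eth1 src 10.20.53.116 table 1000
ip route add default via 10.20.48.1 dev eth1 table 1000
# send traffic sourced from eth1's address through that table
ip rule add from 10.20.53.116/32 lookup 1000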

HAProxy multiple frontend/listeners

I want to use one HAProxy host to direct traffic from multiple frontend/listener IPs to respective backends.
Is there any way to easily accomplish this on Debian/Centos host?
Not using Docker or anything else, just installing HAProxy to offload TCP connections to multiple other servers.
All the information I have read either directs me to ACLs, which would be extreme as we have thousands of domains spread across a number of 'backend' servers, or shows the listener on '*', which is any, of course.
We were using Cisco switch load balancing and now want to do the work in VMs, with easy-to-digest monitoring of the requests to the various servers, adding and removing resources as we need.
HAProxy starts fine, and netstat -pln shows the service on each of the IPs we had configured in the load balancer.
The solution is painfully simple:
On Debian-based systems:
Configure your /etc/network/interfaces file to use virtual network interfaces, with something like:
# The primary or physical network interface
auto eth0
allow-hotplug eth0
iface eth0 inet static
address 192.168.0.10
netmask 255.255.255.0
gateway 192.168.0.1
dns-nameservers 8.8.8.8 8.8.4.4
# first virtual interface
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 192.168.0.11
netmask 255.255.255.0
# second virtual interface
auto eth0:1
allow-hotplug eth0:1
iface eth0:1 inet static
address 192.168.0.12
netmask 255.255.255.0
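With the addresses in place, the HAProxy side is one frontend per IP, each bound explicitly rather than on '*'. A minimal sketch (frontend/backend names and server addresses are placeholders; mode tcp matches the TCP offloading described above):
defaults
    mode tcp
    timeout connect 5s
    timeout client 30s
    timeout server 30s
frontend site_a
    bind 192.168.0.11:80
    default_backend servers_a
frontend site_b
    bind 192.168.0.12:80
    default_backend servers_b
backend servers_a
    server web1 10.0.0.11:80 check
backend servers_b
    server web2 10.0.0.21:80 check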