I need to build a lab with the following parameters:
1 VM uses host-only adapter vboxnet0 (subnet 192.168.56.0/24).
1 VM uses host-only adapter vboxnet1 (subnet 192.168.57.0/24).
Host OS is Linux Mint.
Hypervisor: VirtualBox.
Routing table on the host:
default 192.168.0.1 0.0.0.0 UG 100 0 0 enp4s0
link-local * 255.255.0.0 U 1000 0 0 enp4s0
192.168.0.0 * 255.255.255.0 U 100 0 0 enp4s0
192.168.56.0 * 255.255.255.0 U 0 0 0 vboxnet0
192.168.57.0 * 255.255.255.0 U 0 0 0 vboxnet1
/proc/sys/net/ipv4/ip_forward is set to 1 (enabled)
iptables:
Chain FORWARD (policy ACCEPT)
target prot opt source destination
So the issue is that each VM can only ping its own gateway interface, but the VMs cannot reach each other. What am I missing?
I'm not much of an expert on networking and IP, but as I see it, 192.168.56.0 will only be able to see 192.168.57.0 if the subnet mask is 255.255.0.0; otherwise they are different networks.
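If the goal is for the two /24 subnets to reach each other through the host, one approach (just a sketch, assuming the host keeps the default .1 address on each host-only adapter and the VMs have no other gateway on those networks) is to give each VM a route for the other subnet via the host, which can then forward the traffic thanks to ip_forward=1 and the ACCEPT FORWARD policy shown above:
# On the VM attached to vboxnet0 (the host is assumed to be 192.168.56.1 there):
ip route add 192.168.57.0/24 via 192.168.56.1
# On the VM attached to vboxnet1 (the host is assumed to be 192.168.57.1 there):
ip route add 192.168.56.0/24 via 192.168.57.1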
I am trying to send UDP packets from the DPDK machine to the DUT using pktgen-dpdk, but I cannot see anything sent in the stats, and nothing is received on the DUT either.
Here is the configuration I am using:
DPDK version: 20.11.0
pktgen version: 20.11.3
ENA driver version: 2.4.0
OS: Amazon Linux 2 (AWS EC2 instance)
The pktgen .pkt file is as follows:
stop 0
set 0 dst mac 02:EC:BC:CD:C7:D6 # I tried both the gateway's MAC address and the DUT's
set 0 src ip 192.168.2.187/24
set 0 dst ip 192.168.2.197
set 0 sport 22
set 0 dport 22
set 0 type ipv4
set 0 proto tcp
set 0 size 64
start 0
I also tried multiple different protocols, and even simple ICMP by enabling ICMP for the port and using ping4, but nothing is sent. The port status is as follows:
port 0 status
In addition, when I try to use testpmd to send traffic, I get this:
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 11 RX-dropped: 0 RX-total: 11
TX-packets: 231819494 TX-dropped: 2029505748 TX-total: 2261325242
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 11 RX-dropped: 0 RX-total: 11
TX-packets: 231819494 TX-dropped: 2029505748 TX-total: 2261325242
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It seems all transmissions are immediately dropped.
The issue lies in pktgen not being able to identify the DPDK ENA NIC, because DPDK is built in shared-library mode. To fix the issue (a consolidated sketch follows these steps):
identify the folder where the ENA PMD is located with find / -name "librte_*.so" | grep ena
set the environment path with export LD_LIBRARY_PATH=[path to pmd folder]
make sure to run pktgen with the arguments -d librte_net_ena.so -l 1-3 -- -P -m "2.0, 3.1"
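Putting those steps together, a minimal sketch (the library directory below is only a placeholder; use whatever folder the find command actually reports on your system):
find / -name "librte_*.so" 2>/dev/null | grep ena
export LD_LIBRARY_PATH=/path/to/pmd/folder    # placeholder; the directory that contains librte_net_ena.so
pktgen -d librte_net_ena.so -l 1-3 -- -P -m "2.0, 3.1"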
Note: the issue was also solved via a live debug session.
I am not able to reach any domain via curl from the AWS instance. I have checked all the configuration and everything seems OK to me. Attaching the output of a few commands.
iptables
root@ip-172-31-26-121:~# iptables -nvL
Chain INPUT (policy ACCEPT 4177 packets, 404K bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 3751 packets, 930K bytes)
pkts bytes target prot opt in out source destination
root@ip-172-31-26-121:~#
UFW not installed
netstat
root@ip-172-31-26-121:~# !netstat
netstat -ntup -l
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 870/nginx: master p
tcp 0 0 127.0.0.1:3306 0.0.0.0:* LISTEN 884/mysqld
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 870/nginx: master p
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 641/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 844/sshd
tcp6 0 0 :::80 :::* LISTEN 870/nginx: master p
tcp6 0 0 :::22 :::* LISTEN 844/sshd
udp 0 0 127.0.0.53:53 0.0.0.0:* 641/systemd-resolve
udp 0 0 172.31.26.121:68 0.0.0.0:* 618/systemd-network
curl
root@ip-172-31-26-121:~# curl -v facebook.com
* Rebuilt URL to: facebook.com/
* Trying 185.60.216.35...
* TCP_NODELAY set
^C
root@ip-172-31-26-121:~#
One more thing I noticed: after an instance reboot, everything works for 30-40 seconds, then it stops working again.
Can anyone please suggest what else I can check?
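In case it helps to narrow things down, a few generic checks (these are only suggestions based on the symptoms above; the address in the curl line is just the facebook.com IP from the trace above):
# Try an IP directly, to separate DNS problems from routing/filtering problems
curl -v --connect-timeout 5 http://185.60.216.35/
# Check that the default route and DNS are still sane once it stops working
ip route show
systemd-resolve --status
# Outbound security-group rules and network ACLs are enforced outside the OS,
# so they are also worth reviewing in the AWS console.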
I'm new to Google Cloud, so my explanation may not be precise.
I have a VM with Ubuntu 18.04 on Google Cloud Platform, and I have installed the Squid 3 proxy server on it.
The proxy is already partially configured:
http_port 3128 transparent
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 2
auth_param basic realm My Proxy Server
auth_param basic credentialsttl 24 hours
auth_param basic casesensitive off
#add acl rules
acl users proxy_auth REQUIRED
#http access rules
http_access deny !users
http_access allow users
In the Google console I can see the server's external IP address, but the proxy does not work through it.
The ifconfig command shows the following:
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
inet 10.156.0.3 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4001:aff:fe9c:3 prefixlen 64 scopeid 0x20<link>
ether 42:01:0a:9c:00:03 txqueuelen 1000 (Ethernet)
RX packets 104399 bytes 83418274 (83.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 93840 bytes 12598292 (12.5 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 16697 bytes 1149429 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16697 bytes 1149429 (1.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
where inet 10.156.0.3 is my internal IP.
I suppose I don't understand some simple rule of working with Google Cloud Platform, or just with the proxy configuration.
Can you show me where I'm wrong?
Thank you.
To solve your issue, check with nmap which ports are open on your VM; if 3128 is closed, set a network tag on your VM and add a firewall rule that allows access to it.
I've tried to replicate your issue on my test VM:
create a VM instance or use an existing one
install Squid
check if Squid is running:
$ sudo systemctl status squid
● squid.service - LSB: Squid HTTP Proxy version 3.x
Loaded: loaded (/etc/init.d/squid; generated)
Active: active (running) since Wed 2020-02-19 11:47:50 UTC; 26s ago
check accessibility to Squid with nmap:
$ nmap -Pn 35.XXX.155.XXX
Starting Nmap 7.80 ( https://nmap.org ) at 2020-02-19 12:53 CET
...
Host is up (0.023s latency).
Not shown: 996 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
8000/tcp closed http-alt
8081/tcp closed blackice-icecap
Squid is not accessible from outside
edit the VM and set the network tag proxy-server (a gcloud sketch for this step follows the list)
add a firewall rule to enable connections to Squid by using the network tag:
$ gcloud compute --project=test-prj firewall-rules create proxy-server-rule --direction=INGRESS --priority=999 --network=default --action=ALLOW --rules=tcp:3128 --source-ranges=0.0.0.0/0 --target-tags=proxy-server
check accessibility to Squid with nmap again:
$ nmap -Pn 35.XXX.155.XXX
Starting Nmap 7.80 ( https://nmap.org ) at 2020-02-19 12:53 CET
...
Host is up (0.022s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3128/tcp open squid-http
3389/tcp closed ms-wbt-server
8000/tcp closed http-alt
8081/tcp closed blackice-icecap
now Squid is ready to use.
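For completeness, the network tag from the step above can also be added from the command line; the instance name and zone here are placeholders, and the final curl line assumes the basic-auth user from the question's config:
$ gcloud compute instances add-tags my-squid-vm --zone=europe-west3-a --tags=proxy-server
$ curl -I -x http://user:password@35.XXX.155.XXX:3128 https://example.com/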
Env info:
I built my k8s cluster with VirtualBox on a Mac. The node OS is CentOS 7.3. There are two nodes and one master, each of which has a NAT network (which can reach the public network) and a Host-Only network (which can reach the inner network). The IP info is as follows:
master:
network enp0s3: 192.168.99.100/24 (Host-only network; node1 and node2 can reach this IP)
network enp0s8: 10.0.3.15/24 (NAT network)
node1:
network enp0s3: 192.168.57.3/24 (Host-only network; the master and node2 can reach this IP)
network enp0s8: 10.0.3.16/24 (NAT network)
node2:
network enp0s3: 192.168.58.2/24 (Host-only network; the master and node1 can reach this IP)
network enp0s8: 10.0.3.17/24 (NAT network)
The k8s versions are:
kubernetes (v1.5.2), etcd (3.1.7), flannel (0.7.0).
Network config set on the master:
etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
Flannel settings on node1:
/run/flannel/subnet.env:
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.94.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
/etc/sysconfig/flanneld:
FLANNEL_ETCD_ENDPOINTS="http://192.168.99.100:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_OPTIONS="-iface=enp0s3 -public-ip=192.168.57.3 -ip-masq=true"
Flannel settings on node2:
/run/flannel/subnet.env:
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.50.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
/etc/sysconfig/flanneld:
FLANNEL_ETCD_ENDPOINTS="http://192.168.99.100:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_OPTIONS="-iface=enp0s3 -public-ip=192.168.58.2 -ip-masq=true"
node1's route:
flannel0:172.17.94.0/16
docker0:172.17.94.1/24
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.3.2 0.0.0.0 UG 100 0 0 enp0s8
10.0.3.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
172.17.94.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.57.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
and
default via 10.0.3.2 dev enp0s8 proto static metric 100
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.16 metric 100
172.17.0.0/16 dev flannel0 proto kernel scope link src 172.17.94.0
172.17.94.0/24 dev docker0 proto kernel scope link src 172.17.94.1
192.168.57.0/24 dev enp0s3 proto kernel scope link src 192.168.57.3 metric 100
node2's route:
flannel0: 172.17.50.0/16
docker0: 172.17.50.1/24
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.3.2 0.0.0.0 UG 0 0 0 enp0s8
10.0.3.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s8
169.254.0.0 0.0.0.0 255.255.0.0 U 1003 0 0 enp0s8
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 flannel0
172.17.0.0 0.0.0.0 255.255.0.0 U 1 0 0 flannel0
172.17.50.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.58.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
and
default via 10.0.3.2 dev enp0s8
10.0.3.0/24 dev enp0s8 proto kernel scope link src 10.0.3.17
169.254.0.0/16 dev enp0s8 scope link metric 1003
172.17.0.0/16 dev flannel0
172.17.0.0/16 dev flannel0 scope link metric 1
172.17.50.0/24 dev docker0 proto kernel scope link src 172.17.50.1
192.168.58.0/24 dev enp0s3 proto kernel scope link src 192.168.58.2 metric 100
Then, pinging node2's docker IP (172.17.50.1) from node1 does not work, and pinging node1's docker IP (172.17.94.1) does not work either. Using tcpdump to inspect the traffic, it looks like the network is not configured correctly: the source and destination IPs should be 192.168.57.3 (through enp0s3), not 10.0.3.16 (through enp0s8):
I don't know why the nodes can't reach each other through flannel. Hoping for help, thanks.
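For reference, the kind of capture used for that observation could look like the following (a sketch only; 8285 is the default port of flannel's UDP backend, and the interface names are the ones from the setup above):
# On node1: flannel traffic should appear on the host-only interface...
tcpdump -ni enp0s3 udp port 8285
# ...but in this setup it shows up on the NAT interface instead
tcpdump -ni enp0s8 udp port 8285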
My host is macOS. In VirtualBox, I created a host-only network named "vboxnet0", whose adapter IPv4 address is 192.168.56.1/24, with IPv6 disabled and the DHCP server disabled.
I have an Ubuntu Server VM with 2 network adapters enabled. Adapter 1 uses NAT (eth0); Adapter 2 uses the host-only network "vboxnet0" (eth1). In /etc/network/interfaces, I added the following:
auto eth1
iface eth1 inet static
address 192.168.56.50
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
From the host, I can ping the VM; however, I can't ping the host from the VM.
Host's routing table:
Destination Gateway Flags Refs Use Netif Expire
default 192.168.0.1 UGSc 38 0 en1
127 127.0.0.1 UCS 0 0 lo0
127.0.0.1 127.0.0.1 UH 9 169482 lo0
169.254 link#5 UCS 0 0 en1
192.168.0 link#5 UCS 0 0 en1
192.168.0.1/32 link#5 UCS 1 0 en1
192.168.0.1 84:94:8c:91:1a:f2 UHLWIir 40 25 en1 1194
192.168.0.15/32 link#5 UCS 0 0 en1
192.168.56 link#11 UC 2 0 vboxnet
192.168.56.1 a:0:27:0:0:0 UHLWIi 1 76 lo0
192.168.56.50 8:0:27:9d:5:77 UHLWI 0 5 vboxnet 1084
VM's routing table:
Destination Gateway Genmask Flags Metric Ref Use Iface
default 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
10.0.2.0 * 255.255.255.0 U 0 0 0 eth0
192.168.56.0 * 255.255.255.0 U 0 0 0 eth1
VM's ARP table:
Address HWtype HWaddress Flags Mask Iface
192.168.56.1 ether 0a:00:27:00:00:00 C eth1
10.0.2.2 ether 52:54:00:12:35:02 C eth0
192.168.56.1's MAC address matches the configuration on the host, which means ARP works.
Starting Wireshark on the host to listen on the interface "vboxnet0", I can see the ARP and ICMP packets arriving on the host. The ICMP packet says: "Expert Info (Warn/Sequence): No response seen to ICMP request in frame 14". (I can't post the screenshot because of lack of reputation.)
Firewall settings.
(I know this is an old question, but I hope this will help anyone reading)
I'm not sure about the firewall settings on macOS. But on Windows 10, when I can't ping from the host to the VM yet can ping from the VM to the host, it is caused by an outbound firewall rule.
If you don't know which part of which device's firewall to configure, start by disabling the whole thing and go from there.
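On a macOS host, that could look roughly like the following (a sketch only; remember to re-enable the firewall afterwards):
# Application firewall: check the state and temporarily turn it off
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off
# Packet filter (pf): show status and temporarily disable
sudo pfctl -s info
sudo pfctl -d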