I can't see Istio's traffic on Envoy outbound port 15001 or Envoy inbound port 15006 - istio

After deploying the sample application, I want to trace the life of a packet in Istio, following this article.
After I execute the command below, I get nothing.
tcpdump -i calib5d7dbd52bc port 15006 -A
calib5d7dbd52bc is the veth pair of the productpage-v1 pod.
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on calib5d7dbd52bc, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
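One thing to keep in mind is that tcpdump sees packets before iptables rewrites them, so a filter on port 15006 can come up empty even in the right place. A sketch that is more likely to show the redirected flows, assuming a Docker-based node (the container ID is a hypothetical placeholder; find it with docker ps), is to capture inside the pod's network namespace:
PID=$(docker inspect --format '{{.State.Pid}}' <productpage-container-id>)   # placeholder container ID
nsenter -t "$PID" -n tcpdump -i any -A 'port 15001 or port 15006 or port 9080'
Port 9080 is the productpage service port in the Bookinfo sample, so the application traffic shows up even where the Envoy ports do not.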

Related

pkt-gen dpdk not sending any packets issue

I am trying to send UDP packets from the DPDK machine to the DUT using pktgen-dpdk, but I cannot see anything sent in the stats and nothing is received on the DUT.
Here is the configuration I am using:
DPDK version: 20.11.0
pktgen version: 20.11.3
ena driver version: 2.4.0
OS: Amazon Linux 2, AWS EC2 instance
The pktgen .pkt file is as follows:
stop 0
set 0 dst mac 02:EC:BC:CD:C7:D6 # tried both the gateway's and the DUT's MAC address
set 0 src ip 192.168.2.187/24
set 0 dst ip 192.168.2.197
set 0 sport 22
set 0 dport 22
set 0 type ipv4
set 0 proto tcp
set 0 size 64
start 0
I also tried several different protocols, and even simple ICMP by enabling ICMP on the port and using ping4, but nothing is sent. The port status is as follows:
port 0 status
In addition, when I try to use testpmd to send traffic, I get this:
---------------------- Forward statistics for port 0 ----------------------
RX-packets: 11 RX-dropped: 0 RX-total: 11
TX-packets: 231819494 TX-dropped: 2029505748 TX-total: 2261325242
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
RX-packets: 11 RX-dropped: 0 RX-total: 11
TX-packets: 231819494 TX-dropped: 2029505748 TX-total: 2261325242
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
It seems all transmissions are immediately dropped.
The issue is that pktgen cannot identify the DPDK ENA NIC, because it is built in shared-library mode. To fix the issue:
identify the folder where the ENA PMD is located with find / -name "librte_*.so" | grep ena
set the library path with export LD_LIBRARY_PATH=[path to pmd folder]
make sure to run pktgen with the arguments -d librte_net_ena.so -l 1-3 -- -P -m "2.0, 3.1"
Note: the issue was also solved in a live debugging session.
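Put together, the session looks roughly like this (the LD_LIBRARY_PATH value and the pktgen binary location are assumed examples; use whatever directory the find command reports):
find / -name "librte_*.so" | grep ena        # locate the ENA PMD
export LD_LIBRARY_PATH=/usr/local/lib64      # assumed path; use the PMD folder found above
pktgen -d librte_net_ena.so -l 1-3 -- -P -m "2.0, 3.1"   # load the ENA driver explicitly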

Troubleshooting mount.nfs: Connection timed out for Centos 7 machines

Can somebody help me troubleshoot setting up NFS share between two Centos 7 machines?
https://www.howtoforge.com/nfs-server-and-client-on-centos-7
https://www.unixmen.com/setting-nfs-server-client-centos-7/
I have configured the firewall and the server is working fine; I can mount the shared folder from a different (third) CentOS 7 machine.
However, on this other client machine, let's call it 111.111.111.111 I cannot mount:
`mount -t nfs 255.255.255.255:/var/nfsshare /some/existing/folder`
(I get mount.nfs: Connection timed out)
When I run tcpdump alongside, I get:
[root@111.111.111.111 ~]# tcpdump -i eth0 -n host 255.255.255.255
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
13:45:35.795666 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467213240 ecr 0,nop,wscale 7], length 0
13:45:36.797428 IP 111.111.111.111.1015 > 255.255.255.255.nfs: Flags [S], seq 221559787, win 29200, options [mss 1460,sackOK,TS val 2467214242 ecr 0,nop,wscale 7], length 0
...
The client CAN ping the server.
rpcinfo -p 161.53.19.149
gives:
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection timed out
However, I can telnet from the client to both ports 111 and 2049.
From what I've read this should be a firewall issue, but apparently it is not, as it doesn't work even if I disable the firewall on the server (or even at the client).
How should I troubleshoot this next?
Here's the best workbook I've found for troubleshooting NFS connections:
https://docs.oracle.com/cd/E23824_01/html/821-1454/rfsadmin-215.html
Follow those instructions slowly and carefully and they should turn up the problem. That doc is a good example of a step-by-step troubleshooting where you check all the connectivity prerequisites before checking the actual service you're trying to test.
Here's some additional info that may help:
Your network sniff output is simple - the server isn't responding to you on the NFS TCP port. I hope the server's IP isn't really 255.255.255.255, since that's a broadcast address and is unlikely to work reliably.
You may have dropped all the firewalls, but the NFS server has its own permissions control, in the /etc/exports file according to the HowToForge link that you were following. You need to specify ALL the clients, not just a single IP address. You can also use a network range that includes all the clients. "man 5 exports" should tell you more about how to edit this file. Please DON'T put in "*" to match all IP addresses as suggested in the HowToForge link, that is generally a bad idea.
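For example, an /etc/exports entry on the server that covers a whole client subnet might look like the line below (192.168.1.0/24 is an assumed range; substitute your own network); after editing, run exportfs -ra on the server to re-read the file:
/var/nfsshare 192.168.1.0/24(rw,sync,no_root_squash)
exportfs -ra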
portmapper might be using the TCP wrappers permissions files - /etc/hosts.deny and /etc/hosts.allow - see "man 5 hosts_access" for the format of these files.
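If TCP wrappers are in play, entries like these in /etc/hosts.allow on the server would admit the client (the client IP mirrors the placeholder above; exact daemon names can vary by distribution):
rpcbind: 111.111.111.111
mountd: 111.111.111.111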
look in the syslog files for the IP address of the client to see if there are any messages about that client.
Even though you think you turned the firewall off, run "iptables -vL" to see if there are any rules you overlooked and whether they have any hits.
If you have custom MTU settings on any of the machines (for example, on storage-specific LANs people often set up jumbo packets) make sure that there are no mismatches. This is unlikely to happen on a home network.
Your sniff shows the client is attempting to connect via TCP to the nfs port 2049; it's possible the client is configured for NFSv4 and the server is configured for NFSv3 or lower. You might see this with the rpcinfo command, since it shows the versions of NFS supported by the server.
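If the versions don't line up, forcing one from the client is a quick test (the server address here is the placeholder used in the question):
mount -t nfs -o vers=3 255.255.255.255:/var/nfsshare /some/existing/folder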

Squid proxy at ubuntu 18.04 impossible to connect

I'm new to Google Cloud, so my explanation may not be precise.
I have a VM with Ubuntu 18.04 on Google Cloud Platform and have installed the Squid 3 proxy server on it.
The proxy is already configured a little:
http_port 3128 transparent
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid/passwd
auth_param basic children 2
auth_param basic realm My Proxy Server
auth_param basic credentialsttl 24 hours
auth_params basic casesensitive off
#add acl rules
acl users proxy_auth REQUIRED
#http access rules
http_access deny !users
http_access allow users
In the Google console I can see the server's external IP address, but the proxy does not work through it.
The ifconfig command shows the following:
ens4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1460
inet 10.156.0.3 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::4001:aff:fe9c:3 prefixlen 64 scopeid 0x20<link>
ether 42:01:0a:9c:00:03 txqueuelen 1000 (Ethernet)
RX packets 104399 bytes 83418274 (83.4 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 93840 bytes 12598292 (12.5 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 16697 bytes 1149429 (1.1 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 16697 bytes 1149429 (1.1 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
where inet 10.156.0.3 is my internal IP.
I suppose I'm missing some simple rule of working with Google Cloud Platform, or just with the proxy configuration.
Can you show me where I'm wrong?
Thank you.
To solve your issue, check with nmap which ports are open on your VM; if 3128 is closed, set a network tag on your VM and add a firewall rule to allow access to it.
I've tried to replicate your issue on my test VM:
create VM instance or use existing one
install Squid
check if Squid is running:
$ sudo systemctl status squid
● squid.service - LSB: Squid HTTP Proxy version 3.x
Loaded: loaded (/etc/init.d/squid; generated)
Active: active (running) since Wed 2020-02-19 11:47:50 UTC; 26s ago
check accessibility to Squid with nmap:
$ nmap -Pn 35.XXX.155.XXX
Starting Nmap 7.80 ( https://nmap.org ) at 2020-02-19 12:53 CET
...
Host is up (0.023s latency).
Not shown: 996 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3389/tcp closed ms-wbt-server
8000/tcp closed http-alt
8081/tcp closed blackice-icecap
Squid is not available
edit VM and set Network tag proxy-server
add firewall rule to enable connections to Squid by using Network tag:
$ gcloud compute --project=test-prj firewall-rules create proxy-server-rule --direction=INGRESS --priority=999 --network=default --action=ALLOW --rules=tcp:3128 --source-ranges=0.0.0.0/0 --target-tags=proxy-server
check accessibility to Squid with nmap again
$ nmap -Pn 35.XXX.155.XXX
Starting Nmap 7.80 ( https://nmap.org ) at 2020-02-19 12:53 CET
...
Host is up (0.022s latency).
Not shown: 995 filtered ports
PORT STATE SERVICE
22/tcp open ssh
3128/tcp open squid-http
3389/tcp closed ms-wbt-server
8000/tcp closed http-alt
8081/tcp closed blackice-icecap
now Squid is ready to use.
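A quick end-to-end check from another machine is to send a request through the proxy with curl (the external IP is masked as in the nmap output; the user and password are whatever you created for basic_ncsa_auth):
curl -I -x http://35.XXX.155.XXX:3128 --proxy-user user:password https://example.com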

UDP socket between c++ program and netcat [duplicate]

I noticed a strange behaviour working with netcat and UDP. I start an instance (instance 1) of netcat that listens on a UDP port:
nc -lu -p 10000
Then I launch another instance of netcat (instance 2) and try to send datagrams to my process:
nc -u 127.0.0.1 10000
I see the datagrams. But if I close instance 2 and relaunch netcat again (instance 3):
nc -u 127.0.0.1 10000
I can't see the datagrams in instance 1's terminal. Obviously the operating system assigns instance 3 a different UDP source port than instance 2, and that is where the problem lies: if I use instance 2's source port (for example 50000):
nc -u -p 50000 127.0.0.1 10000
instance 1 of netcat receives the datagrams again. UDP is a connectionless protocol, so why? Is this standard netcat behaviour?
When nc is listening to a UDP socket, it 'locks on' to the source port and source IP of the first packet it receives. Check out this trace:
socket(PF_INET, SOCK_DGRAM, IPPROTO_UDP) = 3
setsockopt(3, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(3, {sa_family=AF_INET, sin_port=htons(10000), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
recvfrom(3, "f\n", 2048, MSG_PEEK, {sa_family=AF_INET, sin_port=htons(52832), sin_addr=inet_addr("127.0.0.1")}, [16]) = 2
connect(3, {sa_family=AF_INET, sin_port=htons(52832), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
Here you can see that it created a UDP socket, set it for address reuse, and bound it to port 10000. As soon as it received its first datagram (from port 52832), it issued a connect system call, 'connecting' it to 127.0.0.1:52832. For UDP, a connect rejects all packets that don't match the IP and port given in the connect.
Use the -k option:
nc -l -u -k 0.0.0.0 10000
-k means keep open: netcat keeps listening after each connection
-u means UDP
-l means listen (on port 10000 in this example)
Having given up on netcat on my OS version, this Ruby script is pretty short and gets the job done:
#!/usr/bin/ruby
# Receive UDP packets bound for a port and output them
require 'socket'
require 'yaml'
unless ARGV.count == 2
  puts "Usage: #{$0} listen_ip port_number"
  exit(1)
end
listen_ip = ARGV[0]
port = ARGV[1].to_i
u1 = UDPSocket.new
u1.bind(listen_ip, port)
while true
  mesg, addr = u1.recvfrom(100000)
  puts mesg
end
As the accepted answer explains, ncat appears not to support --keep-open with the UDP protocol. However, the error message which it prints hints at a workaround:
Ncat: UDP mode does not support the -k or --keep-open options, except with --exec or --sh-exec. QUITTING.
Simply adding --exec /bin/cat allows --keep-open to be used. Both input and output will be connected to /bin/cat, which effectively turns it into an "echo server": whatever the client sends is copied back to it.
To do something more useful with the input, we can use the shell's redirection operators (thus requiring --sh-exec instead of --exec). To see the data on the terminal, this works:
ncat -k -l -u -p 12345 --sh-exec "cat > /proc/$$/fd/1"
Caveat: the above example sends data to the stdout of ncat's parent shell, which could be confusing if combined with additional redirections. To simply append all output to a file is more straightforward:
ncat -k -l -u -p 12345 --sh-exec "cat >> ncat.out"

ping: http://google.com: Name or service not known [closed]

I'm using CentOS 7 in VirtualBox on Windows, created with Vagrant. I get a ping error with http or https URLs, and the same with curl. Can someone help me figure out how to fix it?
[root@localhost ~]# ping google.com
PING google.com (61.91.161.217) 56(84) bytes of data.
64 bytes from chatenabled.mail.google.com (61.91.161.217): icmp_seq=1 ttl=43 time=404 ms
64 bytes from chatenabled.mail.google.com (61.91.161.217): icmp_seq=2 ttl=43 time=408 ms
64 bytes from chatenabled.mail.google.com (61.91.161.217): icmp_seq=3 ttl=43 time=407 ms
64 bytes from chatenabled.mail.google.com (61.91.161.217): icmp_seq=4 ttl=43 time=408 ms
^C
--- google.com ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4000ms
rtt min/avg/max/mdev = 404.297/407.234/408.956/1.887 ms
[root@localhost ~]# ping https://google.com
ping: https://google.com: Name or service not known
[root@localhost ~]# ping https://61.91.161.217
ping: https://61.91.161.217: Name or service not known
resolv.conf
[root@localhost ~]# cat /etc/resolv.conf
nameserver 10.0.2.3
nameserver 8.8.8.8
nameserver 8.8.4.4
search localhost
ifconfig
[root@localhost ~]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 10.0.2.15 netmask 255.255.255.0 broadcast 10.0.2.255
inet6 fe80::5054:ff:fe73:fb1 prefixlen 64 scopeid 0x20<link>
ether 52:54:00:73:0f:b1 txqueuelen 1000 (Ethernet)
RX packets 610587 bytes 48453952 (46.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 468759 bytes 41290880 (39.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.33.10 netmask 255.255.255.0 broadcast 192.168.33.255
inet6 fe80::a00:27ff:fe0e:ae16 prefixlen 64 scopeid 0x20<link>
ether 08:00:27:0e:ae:16 txqueuelen 1000 (Ethernet)
RX packets 3069145 bytes 2674132747 (2.4 GiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2531212 bytes 213727091 (203.8 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Network files automatically created by Vagrant:
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
NAME="eth0"
ONBOOT=yes
NETBOOT=yes
UUID="704aa015-53dd-4ba7-9689-b9b8bf6e09a5"
IPV6INIT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
PEERDNS=yes
PEERROUTES=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
HWADDR=52:54:00:73:0f:b1
DNS1=8.8.8.8
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
NM_CONTROLLED=no
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.33.10
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
DNS1=8.8.8.8
First, make sure your network adapter is enabled in the VirtualBox settings.
The network interface you use for connecting to the internet might not be active.
To check this, use the "sudo nmcli d" command.
If it is disconnected, use "sudo nmtui" -> Edit a connection, select your network interface, choose the "Automatically connect" option (by pressing the Space key) and select OK.
Run "sudo reboot now". After logging in, run "ping www.google.com".
You should be able to connect now.
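As a sketch (eth0 is an assumed interface name; use whichever device "nmcli d" lists):
sudo nmcli d                  # list devices and their connection state
sudo nmcli d connect eth0     # bring the interface up without going through nmtui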
ping has nothing to do with HTTP or HTTPS:
ping uses the ICMP protocol, which is part of the TCP/IP suite.
Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP response.
ping works at a much lower level than HTTP or HTTPS, and it only accepts hostnames or IP addresses, not URLs.
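In other words, give ping a bare hostname and use an HTTP client for URLs:
ping google.com                # hostname only, ICMP
curl -I https://google.com     # URL, HTTP(S)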
Change the VirtualBox network card:
use Intel PRO/1000 T Server (82543GC)
I tried a lot of different solutions and changed resolv.conf a billion times. In the end I just needed to restart the router, lol. That solved it for me; I hope it does the same for you.
There is another possibility: if your machine is a clone, check the UUID of the server's NIC. If the UUID is the same as the original machine's, this can also happen; in that case, delete the UUID line.
e.g. CentOS 7:
cat /etc/sysconfig/network-scripts/ifcfg-ens192
...
UUID=03da7500-2101-c722-2438-xxxxxxx
...
If you are able to ping all the network devices and are only facing an issue like "ping: http://google.com: Name or service not known", then try removing all the lines in /etc/resolv.conf and putting in only one nameserver.
Okay, I tried so many times with all the different methods. In the end, what worked was getting my Linux system connected to the Internet: I changed the VirtualBox adapter to NAT and it worked.
Check /etc/nsswitch.conf and remove the # from the line below:
networks: files #dns