I was wondering what the difference is between using pkt.time and pkt[IP].time, since they give different times for the same packet.
I was also wondering how to interpret a packet time such as 1430123453.564733.
If anyone has an idea or knows where I can find such information it would be very helpful.
Thanks.
pkt.time gives you the epoch time that is included in the FRAME layer of the packet in Wireshark.
By the same notation, pkt[IP].time would give you the time included in the IP layer of the packet in Wireshark. But the IP layer has no time field, so I don't think that command will work.
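A value like 1430123453.564733 is a Unix epoch timestamp: seconds (with a fractional part) elapsed since 1970-01-01 00:00:00 UTC. A minimal C++ sketch of the conversion (variable names are illustrative):

#include <cstdio>
#include <ctime>

int main()
{
    double pkt_time = 1430123453.564733;   // the value from the question

    std::time_t secs = static_cast<std::time_t>(pkt_time);        // whole seconds
    int micros = static_cast<int>((pkt_time - secs) * 1e6 + 0.5); // fractional part

    char buf[64];
    std::strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", std::gmtime(&secs));
    std::printf("%s.%06d UTC\n", buf, micros);   // a date in late April 2015
    return 0;
}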
I'm using Amazon's sample code to upload an RTSP stream from an IP camera to Kinesis Video Streams. Code found here: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp/blob/master/samples/kvs_gstreamer_sample.cpp
I would like to get the NTP time from the camera for each frame. My understanding is that the first step in doing this is reading the RTCP sender report to get the synchronizing time for the camera's RTP and NTP times.
To do that, I've added a callback on receiving RTCP packets like so:
g_signal_connect_after(session, "on-receiving-rtcp", G_CALLBACK(on_rtcp_callback), data);
Then, in my callback function after getting the SR packet from the buffer, I try to get the two timestamps:
gst_rtcp_packet_sr_get_sender_info(packet, &ssrc, &ntptime, &rtptime, &packet_count, &octet_count);
When comparing the ntptime and rtptime variables I get here with what I see in Wireshark, the RTP times match perfectly. However, the NTP time I get in my C++ code is very wrong: it shows a time from about a month ago, while the Wireshark packet shows an NTP time that appears correct.
Is there some setting causing gstreamer to overwrite the NTP time in my sender report packets, and if so, how do I disable that setting?
It turns out the NTP time provided by gst_rtcp_packet_sr_get_sender_info is not in any format I've seen before. To convert it to a meaningful timestamp you have to use gst_rtcp_ntp_to_unix, which then gives you a Unix time that actually makes sense.
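For reference, here is a hedged sketch of what the callback can look like when connected to the internal RTP session object. The gst_rtcp_packet_sr_get_sender_info and gst_rtcp_ntp_to_unix calls are documented GstRTCP API; the wiring around them is my assumption, so check it against your GStreamer version:

#include <gst/rtp/gstrtcpbuffer.h>

/* Sketch only: assumes "session" is the internal RTPSession object, as in
 * g_signal_connect_after(session, "on-receiving-rtcp", ...). */
static void on_rtcp_callback(GObject *session, GstBuffer *buffer, gpointer user_data)
{
  GstRTCPBuffer rtcp = GST_RTCP_BUFFER_INIT;
  GstRTCPPacket packet;

  if (!gst_rtcp_buffer_map(buffer, GST_MAP_READ, &rtcp))
    return;

  if (gst_rtcp_buffer_get_first_packet(&rtcp, &packet)) {
    do {
      if (gst_rtcp_packet_get_type(&packet) == GST_RTCP_TYPE_SR) {
        guint32 ssrc, rtptime, packet_count, octet_count;
        guint64 ntptime;

        gst_rtcp_packet_sr_get_sender_info(&packet, &ssrc, &ntptime,
            &rtptime, &packet_count, &octet_count);

        /* ntptime is a 64-bit (32.32 fixed-point) NTP timestamp; convert it
         * to nanoseconds since the Unix epoch before comparing with Wireshark. */
        guint64 unix_ns = gst_rtcp_ntp_to_unix(ntptime);
        g_print("SR: ssrc=%u rtp=%u unix_ns=%" G_GUINT64_FORMAT "\n",
            ssrc, rtptime, unix_ns);
      }
    } while (gst_rtcp_packet_move_to_next(&packet));
  }

  gst_rtcp_buffer_unmap(&rtcp);
}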
Here is the main problem.
I have a 10-gigabit Ethernet interface and the current flow is 6-7 Gbit/s.
I need to implement a firewall, so I need to capture raw packets and filter some of them.
I simply started to implement it as a raw socket; the necessary code is below. The socket is bound to a specific interface.
socketfd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

strncpy(ifopts.ifr_name, interfaceName, IFNAMSIZ - 1);
ioctl(socketfd, SIOCGIFINDEX, &ifopts);

memset(&sll, 0, sizeof(sll));
sll.sll_family = AF_PACKET;          /* packet sockets must bind with AF_PACKET */
sll.sll_ifindex = ifopts.ifr_ifindex;
sll.sll_protocol = htons(ETH_P_ALL);
bind(socketfd, (struct sockaddr *)&sll, sizeof(sll));
Here is how I read; the MTU size is 9000:

while (1)
    recvfrom(socketfd, buffer, 9000, 0, NULL, NULL);
Without any processing of the packets I get only ~150 Mbit/s.
This is the problem I need to solve. I realize that nload or ip -s link shows the actual rate, but I cannot get anywhere near those 6-7 Gbit/s.
~150 Mbit/s is a ridiculously low rate for me. I need to increase performance as much as I can using one CPU. I will also try PF_INET; if you want, I can share the results.
Here is the answer.
First of all, capture speed depends not only on the byte rate on the interface; the number of packets is just as important, and plain socket capture is limited by packets per second. I measured about 200k packets per second (pps).
Using a better network driver is one way of increasing the pps. PF_RING is one such library and driver; you can use the trial version to test it. I tested it on my network and the result was 14M pps, which comes to almost 10 Gbit/s. That's all I experienced.
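For the record, a hedged sketch of a minimal PF_RING capture loop; "eth0" and the 9000-byte caplen are assumptions, and the exact signatures may differ between PF_RING releases, so check your pfring.h:

#include <stdio.h>
#include <pfring.h>

int main(void)
{
  pfring *ring = pfring_open("eth0", 9000, PF_RING_PROMISC);
  if (ring == NULL) {
    perror("pfring_open");
    return 1;
  }
  pfring_enable_ring(ring);

  u_char *pkt;
  struct pfring_pkthdr hdr;
  while (1) {
    /* buffer_len = 0 requests zero-copy: pkt points into the ring itself. */
    if (pfring_recv(ring, &pkt, 0, &hdr, 1 /* wait for packet */) > 0) {
      /* filter / count the packet here */
    }
  }

  pfring_close(ring);
  return 0;
}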
Thanks all.
Hello members of Stack Overflow,
I am developing a project with DPDK but have encountered an issue that is not obvious to me.
I want to find the right approach to tackle my current problem.
I am sending and receiving 4 KB rte_mbufs between a remote and a local node. That alone works fine. However, when I combine the implementation with a third-party library, DPDK stops receiving data after approximately 8000 packets.
I have debugged everything I can on the program side and, to my astonishment, found no error; all the packets within the first ~8000 are received correctly.
I have no idea how to track down the cause, but the situation can be replicated: it always stops at approximately 8000 packets received.
There are absolutely no bugs to be found on the DPDK (user) side; the only symptom is that the RX queue stops returning packets after ~8000 packets.
Would there be a good approach to identify the problem in this case?
The best approach would be to start with the stats; have a look at rte_eth_stats_get().
We need to check whether any counter is still increasing after the DPDK app has stopped receiving. I bet the rx_nombuf counter is still increasing, which would mean your mempool is exhausted.
If we pass mbufs to an external library, we have to make sure that each mbuf is freed once the library is done with it.
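A minimal sketch of that check, assuming port 0; rte_eth_stats_get() and the rx_nombuf counter are standard DPDK API, while the helper name is made up:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Hypothetical helper: call it periodically after reception "stops". */
static void dump_port_stats(uint16_t port_id)
{
    struct rte_eth_stats stats;

    if (rte_eth_stats_get(port_id, &stats) != 0)
        return;

    /* rx_nombuf counts RX failures due to mbuf allocation: if it keeps
     * growing, the mempool is exhausted, typically because mbufs handed
     * to the 3rd-party library were never rte_pktmbuf_free()'d. */
    printf("ipackets=%" PRIu64 " imissed=%" PRIu64 " rx_nombuf=%" PRIu64 "\n",
           stats.ipackets, stats.imissed, stats.rx_nombuf);
}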
I wrote simple server and client apps where I can switch between the TCP, DCCP and UDP protocols. The goal was to transfer a file from one to the other and measure the traffic for each protocol, so I can compare them for different network setups (I know roughly what the result should be, but I need exact numbers/graphs). Anyway, after starting both apps on different computers and starting tcpdump, I only get the first few MBs (~50 MB) of my 4 GB file in the tcpdump log. The apps are written in standard C/C++ code of the kind that can be found anywhere on the web.
What may be the problem or what could I be doing wrong here?
-- Edit
The command line I use is:
tcpdump -s 1500 -w mylog
tcpdump then captures packets only for the first ~55 seconds. That's the time the client needs to send the file to the socket. Afterwards it stops, even though the server continues receiving and writing the file to the hard drive.
-- Edit2
Source code:
client.cpp
server.cpp
common.hpp
common.cpp
-- Edit final
As many of you pointed out (and as I suspected), there were several misconceptions/bugs in the source code. After I cleaned it up (or almost rewrote it), it works as needed with tcpdump. I will accept the answer from Laurent Parenteau, but only for point 5, as it was the only one relevant to the problem. If someone is interested in the correct code, here it is:
Source code edited
client.cpp
server.cpp
common.hpp
common.cpp
There are many things wrong in the code.
1. The file size / transfer size is hardcoded to 4294967295 bytes. So, if the supplied file isn't exactly that many bytes, you'll have problems.
2. In the sender, you aren't checking whether the file read is successful. So if the file is smaller than 4294967295 bytes, you won't know it and will send junk data (or nothing at all) over the network.
3. When you use UDP and DCCP, the packet order isn't guaranteed, so the data may be received out of order (i.e. junk).
4. When you use UDP, there's no retransmission of lost packets, so some data may never be received.
5. In the receiver, you aren't checking how many bytes you received; you always write MAX_LINE bytes to the file. So even if you receive 0 bytes, you'll still be writing to the file, which is wrong. (See the sketch after this list.)
6. When you use UDP, since you're sending in a tight loop, a lot of data will probably be dropped by the network stack or the network interface even if the write() call returns the same number of bytes you requested, because there's no congestion control in place. So you will need to put some congestion control in place yourself.
And this is just from a quick scan of the code; there are probably more problems in there...
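For point 5 specifically, here is a minimal sketch of a receive loop that honors recv()'s return value; the names sockfd and outfile are illustrative, not taken from the posted code:

#include <cstdio>
#include <sys/types.h>
#include <sys/socket.h>

bool receive_to_file(int sockfd, std::FILE *outfile)
{
    char buf[4096];
    for (;;) {
        ssize_t n = recv(sockfd, buf, sizeof(buf), 0);
        if (n == 0)             // orderly shutdown: the sender closed the socket
            return true;
        if (n < 0) {            // real error: report it instead of writing junk
            std::perror("recv");
            return false;
        }
        // Write exactly n bytes, never a fixed MAX_LINE.
        if (std::fwrite(buf, 1, (size_t)n, outfile) != (size_t)n)
            return false;
    }
}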
My suggestion is:
Try the transfer with TCP. Do an md5sum of the file you read/send and an md5sum of the file you receive/save, and compare the two. Once you have this case working, you can move on to testing (still using the md5sum comparison) with UDP and DCCP...
For the tcpdump command, you should change -s 1500 to -s 0, which means unlimited. With that tcpdump command, you can trust that data it doesn't show was never sent/received. Another good thing to do is to compare the tcpdump output of the sender with that of the receiver. This way you'll know if any packet loss occurred between the two network stacks.
Do you have X terminal access? Switch to Wireshark instead and try with that - it's free, open source, and probably more widely used than tcpdump today. (It was formerly known as Ethereal.)
Also, do try the following tcpdump options:
-xx  print the link header and the data of the packet as well (does -w write the data?)
-C   specify the max file size explicitly
-U   write packet by packet to the file instead of flushing the buffer
-p   don't put the NIC in promiscuous mode
-O   don't use the packet-matching optimizer, as yours is a new app-level protocol
Are you using verbose output in tcpdump? This can make the buffers fill quickly, so redirect stdout/stderr to a file when you run it.
Are these Gigabit Ethernet cards on both ends?
tcpdump is used as a diagnostic and forensics tool by tens of thousands (at least) of programmers and computer security professionals worldwide. When a tool like this seems to be mishandling a very common task, the first thing to suspect is the code you wrote, not the tool.
In this particular case your code has a wide variety of significant errors. In particular, with TCP, your server will continue to write data to the file regardless of whether or not the client is sending any.
This code has race conditions that will result in non-deterministic behavior in some situations, improperly treats '\0' as being a special value in network data, ignores error conditions, and ignores end-of-file conditions. And that's just a brief reading.
In this case I am nearly certain that tcpdump is functioning perfectly and telling you that your application does not do what you think it does.
"That's the time the client needs to
send the file to the socket.
Afterwards it stops, even though the
server continues receiving and writing
the file to the hard drive."
This sounds really weird. The socket buffers are way too small to allow this to happen. I really think that your server code only seems to receive data, while the sender has actually already stopped sending.
I know this might sound silly, but are you sure it is not a problem of the file not being flushed? I.e., the data is still in memory and not yet written to disk (because it doesn't amount to a sufficient quantity).
Try sync or just wait a bit until you are certain that enough data have been transmitted.
So I'm almost done with an assignment involving Win32 programming and sockets, but I have to generate and analyze some statistics about the transfers. The only part I'm having trouble with is figuring out the number of packets that were sent to the server from the client.
The data sent can be variable-length, so I can't just divide the total bytes received by a #define'd value.
We have to use asynchronous calls to do everything, so I've been trying to increment a counter with every FD_READ message I get for the server's socket. However, because I have to be able to accept a potentially large file, I have to call recv/recvfrom with a buffer size of around 64 KB. If I send a small packet (a-z), there are no problems. But if I send a string of 1024 characters 10 times, the server reports 2 or 3 packets received, yet 0% data loss in terms of bytes sent/received.
Any idea how to get the number of packets?
Thanks in advance :)
This really boils down to what you mean by 'packet.'
As you are probably aware, when a TCP/UDP message is sent on the wire, the data being sent is 'wrapped,' or prepended, with a corresponding TCP/UDP header. This is then 'wrapped' in an IP header, which is in turn 'wrapped' in an Ethernet frame. You can see this breakout if you use a sniffing package like Wireshark.
The point is this. When I hear the term 'packet,' I think of data at the IP level. IP data is truly packetized on the wire, so packet counts make sense when talking about IP. However, if you're using regular sockets to send and receive your data, the IP headers, as well as the TCP/UDP headers, are stripped off, i.e., you don't get this information from the socket. And without that information, it is impossible to determine the number of 'packets' (again, I'm thinking IP) that were transmitted.
You could do what others are suggesting by adding your own header with a length and a counter. This information will help you accurately size your receive buffers, but it won't help you determine the number of packets (again, IP...), especially if you're doing TCP.
If you want to accurately determine the number of packets using Winsock sockets, I would suggest creating a 'raw' socket as suggested here. This socket will collect all IP traffic seen by your local NIC. Use the IP and TCP/UDP headers to filter the data based on your client and server sockets, i.e., IP addresses and port numbers. This will give an accurate picture of how many IP packets were actually used to transmit your data.
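A hedged sketch of such a socket on Winsock, using the SIO_RCVALL ioctl; this needs administrator rights, the NIC address is an assumption, and most error checks are trimmed for brevity (link with ws2_32.lib):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <mstcpip.h>
#include <cstdio>

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET s = socket(AF_INET, SOCK_RAW, IPPROTO_IP);

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    inet_pton(AF_INET, "192.168.1.10", &local.sin_addr);  // your NIC's IP (assumption)
    bind(s, reinterpret_cast<sockaddr*>(&local), sizeof(local));

    // SIO_RCVALL asks the stack to deliver every incoming IP packet.
    DWORD on = RCVALL_ON, bytes = 0;
    WSAIoctl(s, SIO_RCVALL, &on, sizeof(on), nullptr, 0, &bytes, nullptr, nullptr);

    char pkt[65535];
    unsigned long count = 0;
    for (;;) {
        int n = recv(s, pkt, sizeof(pkt), 0);
        if (n <= 0)
            break;
        // pkt starts with the IP header: filter on addresses/ports here
        // so you count only the client/server flow you care about.
        ++count;
        std::printf("IP packets seen: %lu\r", count);
    }

    closesocket(s);
    WSACleanup();
    return 0;
}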
Not a direct answer to your question but rather a suggestion for a different solution.
What if you send a length descriptor in front of the data you want to transfer, as sketched below? That way you can allocate the correct buffer size (not too much, not too little) on the client and also check whether anything was lost once the transfer is over.
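For example, a hedged sketch of the sending side of such a length-prefixed scheme (POSIX names; substitute the Winsock equivalents on Windows; the function name is illustrative):

#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <cstdint>

bool send_with_length(int sockfd, const char *data, uint32_t len)
{
    uint32_t netlen = htonl(len);               // length descriptor first
    if (send(sockfd, &netlen, sizeof(netlen), 0) != (ssize_t)sizeof(netlen))
        return false;

    uint32_t sent = 0;
    while (sent < len) {                        // then the payload itself
        ssize_t n = send(sockfd, data + sent, len - sent, 0);
        if (n <= 0)
            return false;
        sent += (uint32_t)n;
    }
    return true;
}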
With TCP you should have no problem at all, because the protocol itself handles error-free transmission; otherwise you should get a meaningful error.
Maybe with UDP you could also split your transfer into fixed-size chunks with a proper sequence ID. You'd have to accumulate all incoming packets and sort them (UDP makes no guarantee on the receive order) before pasting the data together.
On the other hand, you should consider whether it is really necessary to support UDP, as there is quite some manual overhead if you want to make that protocol error-safe... (see the Wikipedia article on TCP for a list of the problems to get around)
Do your packets have a fixed header, or are you allowed to define your own? If you can define your own, include a packet counter in the header, along with the length. You'll have to keep a running total that accounts for rollover in your counter, but this will ensure you're counting packets sent rather than packets received. For a simple assignment you probably won't encounter loss (with UDP, obviously), but if you did, a packet counter would make sure your statistics reflected the sent message accurately.
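A minimal sketch of such a header (illustrative names); the unsigned subtraction handles 32-bit counter rollover because it is well-defined modulo 2^32:

#include <cstdint>

#pragma pack(push, 1)
struct PacketHeader {
    uint32_t seq;   // incremented by the sender on every packet
    uint32_t len;   // payload bytes that follow this header
};
#pragma pack(pop)

// Receiver side: how many packets were skipped between the expected
// sequence number and the one that actually arrived (0 = in order).
inline uint32_t packets_missed(uint32_t expected, uint32_t seq)
{
    return seq - expected;
}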