Why doesn't my program send ARP requests (C++)?

I am learning low-level sockets with C++. I have written a simple program that is supposed to send an ARP request. The socket seems to send the packet, but I cannot capture it with Wireshark. I have another small program that also sends ARP packets, and those packets are captured by Wireshark (my program below is inspired by that program).
Have I done something wrong?
Removed code
EDIT
Removed code
EDIT 2
It seems that I also need to include Ethernet header data in the packet, so I now build a packet containing both the Ethernet header and the ARP header. Now the packet goes out and is captured by Wireshark, but Wireshark says it is gratuitous. As you can see, neither the IP nor the MAC address of the sender and receiver seems to have been set properly.
36 13.318179 Cimsys_33:44:55 Broadcast ARP 42 Gratuitous ARP for <No address> (Request)
EDIT 3
/*Fill arp header data*/
p.arp.ea_hdr.ar_hrd = htons(ARPHRD_ETHER);
p.arp.ea_hdr.ar_pro = htons(ETH_P_IP);
p.arp.ea_hdr.ar_hln = ETH_ALEN; // Must be pure INTEGER, not called with htons(), as I did
p.arp.ea_hdr.ar_pln = 4; // Must be pure INTEGER, not called with htons(), as I did
p.arp.ea_hdr.ar_op = htons(ARPOP_REQUEST); // Opcode 1 = request; ETH_P_ARP is the EtherType, not an ARP opcode
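Putting EDIT 2 and EDIT 3 together, a minimal sketch of filling the whole 42-byte frame might look like this (my illustration, with assumed MAC and IP values, not the poster's exact code):

#include <arpa/inet.h>
#include <cstdint>
#include <cstring>
#include <net/ethernet.h>
#include <netinet/if_ether.h>

/* Ethernet header immediately followed by the ARP header: 14 + 28 = 42 bytes,
   matching the frame length Wireshark reports above. */
struct arp_frame {
    struct ether_header eth;
    struct ether_arp    arp;
} __attribute__((packed));

void fill_frame(struct arp_frame &p, const uint8_t src_mac[ETH_ALEN])
{
    memset(p.eth.ether_dhost, 0xff, ETH_ALEN);    /* broadcast destination */
    memcpy(p.eth.ether_shost, src_mac, ETH_ALEN); /* our MAC */
    p.eth.ether_type = htons(ETH_P_ARP);

    p.arp.ea_hdr.ar_hrd = htons(ARPHRD_ETHER);
    p.arp.ea_hdr.ar_pro = htons(ETH_P_IP);
    p.arp.ea_hdr.ar_hln = ETH_ALEN;               /* plain integers, no htons() */
    p.arp.ea_hdr.ar_pln = 4;
    p.arp.ea_hdr.ar_op  = htons(ARPOP_REQUEST);   /* opcode 1 = request */

    memcpy(p.arp.arp_sha, src_mac, ETH_ALEN);     /* sender MAC */
    memset(p.arp.arp_tha, 0x00, ETH_ALEN);        /* target MAC is unknown */
    uint32_t spa = inet_addr("192.168.1.5");      /* sender IP (assumed) */
    uint32_t tpa = inet_addr("192.168.1.6");      /* target IP (assumed) */
    memcpy(p.arp.arp_spa, &spa, sizeof(p.arp.arp_spa));
    memcpy(p.arp.arp_tpa, &tpa, sizeof(p.arp.arp_tpa));
}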

This code does not look quite right:
struct in_addr *s_in_addr = (in_addr*)malloc(sizeof(struct in_addr));
struct in_addr *t_in_addr = (in_addr*)malloc(sizeof(struct in_addr));
s_in_addr->s_addr = inet_addr("192.168.1.5"); // source ip
t_in_addr->s_addr = inet_addr("192.168.1.6"); // target ip
memcpy(arp->arp_spa, &s_in_addr, 6);
memcpy(arp->arp_tpa, &t_in_addr, 6);
In the memcpy you are copying 6 bytes out. However, you are taking the address of a pointer type, which makes it a pointer to a pointer. I think you meant to just pass in s_in_addr and t_in_addr.
Edit: Alan Curry notes that you are copying 6 bytes from and to objects that are only 4 bytes long.
However, the dynamic allocation isn't doing your code any good; you should just create the s_in_addr and t_in_addr variables on the stack. Then you would not need to change your memcpy code.
struct in_addr s_in_addr;
struct in_addr t_in_addr;
s_in_addr.s_addr = inet_addr("192.168.1.5"); // source ip
t_in_addr.s_addr = inet_addr("192.168.1.6"); // target ip
memcpy(arp->arp_spa, &s_in_addr, sizeof(arp->arp_spa));
memcpy(arp->arp_tpa, &t_in_addr, sizeof(arp->arp_tpa));
There is a similar problem with your ARP packet itself, so you should allocate it on the stack too. To keep the code changes small, I'll illustrate it slightly differently:
struct ether_arp arp_packet;
struct ether_arp *arp = &arp_packet;
//...
for (int i = 0; i < 10; i++) {
    if (sendto(sock, arp, sizeof(arp_packet), 0,
               (struct sockaddr *)&sending_socket,
               sizeof(sending_socket)) < 0) {
        std::cout << "Could not send!" << std::endl;
    }
}

@user315052 says that you should use memcpy(arp->arp_spa, &s_in_addr, sizeof(arp->arp_spa));, but that just copies the first 4 bytes of s_in_addr to arp->arp_spa, which does absolutely nothing!
So just try this:
*(int32_t *) arp->arp_spa = inet_addr("192.168.1.1");
*(int32_t *) arp->arp_tpa = inet_addr("192.168.1.2");
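Note that this complaint really applies to the original pointer version (where &s_in_addr is the address of the pointer itself); with the stack variables from the previous answer, both forms store the same 4 bytes. A small sketch of the memcpy variant, with assumed IPs:

struct in_addr src, dst;
src.s_addr = inet_addr("192.168.1.1"); /* sender IP (assumed) */
dst.s_addr = inet_addr("192.168.1.2"); /* target IP (assumed) */
memcpy(arp->arp_spa, &src.s_addr, sizeof(arp->arp_spa)); /* exactly the 4 address bytes */
memcpy(arp->arp_tpa, &dst.s_addr, sizeof(arp->arp_tpa));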


Accessing a method in a running C++ program

I need to produce a statistical printout for a socket program.
I am using a method Listen(uint32_t port), run in a C++ thread, to listen to clients on each specified port (there is more than one) and to send/receive clients' transactions to/from a server.
Now I need to write a log file of how many packets are received/sent by this method.
My implementation is shown in the skeleton below:
hub.cpp
//set up necessary header
#include <iostream>
....
#include <vector>
//global variables
std::map<uint32_t,long> * received_pk;
std::map<uint32_t,long> * sent_pk;
void Listen(uint32_t port ); // method
int main (int argc, char **argv){
    //set up client ports
    std::vector<uint32_t> client_ports;
    client_ports.push_back(50002);
    client_ports.push_back(50003);
    //initialize variables (types must match the declarations above)
    received_pk = new std::map<uint32_t,long>();
    sent_pk = new std::map<uint32_t,long>();
    for(uint32_t i=0;i<client_ports.size();i++){
        received_pk->insert(std::pair<uint32_t,long>(client_ports.at(i),0));
        sent_pk->insert(std::pair<uint32_t,long>(client_ports.at(i),0));
    }
    //set up threads
    std::vector<std::thread*> threads;
    for(uint32_t i=0;i<client_ports.size();i++){
        std::cout << "Create Listener in port " << client_ports.at(i) << std::endl;
        threads.push_back(new std::thread(Listen,client_ports.at(i)));
    }
    //Wait for the threads to finish
    for(uint32_t i=0;i<client_ports.size();i++){
        threads.at(i)->join();
    }
}
void Listen(uint32_t port){
    ...
    set up struct sockaddr_in client, host;
    listen on port: port
    ...
    while(1){
        receive packet from client;
        received_pk->at(port)++;
        check packet type
        if(packet==status packet){
            update the packet id number
        }
        if(packet==transaction){
            send packet to Server
            receive reply
            send reply back to client
            sent_pk->at(port)++;
        }
    }
}
Now I need to access received_pk and sent_pk while hub.cpp is still running (probably from inside the while loop).
I thought of two options:
Access received_pk and sent_pk from an external program: i.e. define a method that can fetch the packet counts while the thread is still running.
Problem: I don't know whether I can access a variable/method while the program is executing.
Or print received_pk and sent_pk to a log file every 5 seconds.
Problem: I don't know whether it makes sense to have a timer across the multiple threads.
Any advice will be appreciated.
Kehinde
Quite possibly, the easiest solution is to put the data in shared memory. The map type is a bit suspect - did you mean std::map<Key, Value>? That doesn't fit well in shared memory. Instead, use simple arrays. There are just 64K ports, and sizeof(long long[65536]) isn't excessive.
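If shared memory is more than you need, the second option (a periodic logger) is also easy. Here is a minimal sketch using the simple arrays suggested above, with atomics so the Listen() threads can update the counters safely (the file name and the 5-second interval are assumptions):

#include <atomic>
#include <chrono>
#include <fstream>
#include <thread>

// One slot per port; these would replace the received_pk/sent_pk maps.
// Zero-initialized, and safe to increment from the Listen() threads.
std::atomic<long long> received_count[65536];
std::atomic<long long> sent_count[65536];

void StatsLogger()
{
    for (;;) {
        std::this_thread::sleep_for(std::chrono::seconds(5));
        std::ofstream log("hub_stats.log", std::ios::trunc); // assumed file name
        for (int port = 0; port < 65536; ++port) {
            long long r = received_count[port].load();
            long long s = sent_count[port].load();
            if (r || s)
                log << port << " received=" << r << " sent=" << s << '\n';
        }
    }
}

// In main():   std::thread(StatsLogger).detach();
// In Listen(): received_count[port]++;  and  sent_count[port]++;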

Faster processing of SendARP function

Originally posted here, but found off topic: https://serverfault.com/questions/617459/faster-processing-of-sendarp-function
I've been working on a network scanner for Windows. I have successfully written the code, but the problem is that it takes too much time to scan the hosts that aren't up. When I tried scanning a subnet (1 to 255), it took more than half an hour. I couldn't find a function to control the time limit, or any way to control the time-out of the SendARP function.
DestIp = inet_addr(strn.c_str()); // Setting destination IPv4 dotted-decimal address into a proper address for the IN_ADDR structure
SrcIp = inet_addr(SrcIpString);
memset(&MacAddr, 0xff, sizeof(MacAddr)); // Initializing MAC address to ff-ff-ff-ff-ff-ff
dwRetVal = SendARP(DestIp, SrcIp, &MacAddr, &PhysAddrLen); // Sending ARP request to the destination IP address
if (dwRetVal == NO_ERROR) {
    bPhysAddr = (BYTE *)&MacAddr;
    if (PhysAddrLen) {
        std::cout << strn << std::endl;
        for (int i = 0; i < (int)PhysAddrLen; i++) {
            if (i == ((int)PhysAddrLen - 1))
                printf("%.2X\n", (int)bPhysAddr[i]);
            else
                printf("%.2X-", (int)bPhysAddr[i]);
        }
    }
}
You're using a convenience function from the "IP Helper" library. That's not performance-oriented.
The ServerFault comments actually hit the nail on the head: use threads. With <thread> that's nowadays quite simple. Just make 255 std::async calls to your function. Of course, make sure that all the MacAddr and PhysAddrLen references aren't invalidated; see the sketch below.
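A minimal sketch of that approach (my illustration; the 192.168.1.x subnet is an assumption, and each task owns its own MacAddr/PhysAddrLen so nothing is shared between threads):

#include <winsock2.h>
#include <iphlpapi.h>
#include <cstring>
#include <future>
#include <iostream>
#include <string>
#include <vector>

#pragma comment(lib, "iphlpapi.lib")
#pragma comment(lib, "ws2_32.lib")

// Returns true if the host answered the ARP request.
bool ProbeHost(const std::string &ip)
{
    IPAddr dest = inet_addr(ip.c_str());
    ULONG mac[2]; // local to this task, so no invalidation issues
    ULONG len = 6;
    memset(mac, 0xff, sizeof(mac));
    return SendARP(dest, 0, mac, &len) == NO_ERROR;
}

int main()
{
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);
    std::vector<std::future<bool>> results;
    for (int i = 1; i <= 254; ++i) // launch all probes in parallel
        results.push_back(std::async(std::launch::async, ProbeHost,
                                     "192.168.1." + std::to_string(i)));
    for (int i = 1; i <= 254; ++i) // then collect the answers
        if (results[i - 1].get())
            std::cout << "192.168.1." << i << " is up\n";
    WSACleanup();
}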

ProtocolBuffer can't define data although it is received

I'm trying to use Google's Protocol Buffers to send/receive data in a server/client architecture. I am able to connect the two with Winsock, and I can send and receive the serialized Protocol Buffers data, but there seems to be a problem with deserializing the data on the server (I haven't tried the opposite direction, so I can't tell whether that works yet).
Here's the code I use to create a sample packet of data to send to the server:
MessageID *message = new MessageID();
message->set_type(MessageID::Type::MessageID_Type_PLAYERDATA);
PlayerData::Vector2 *position = new PlayerData::Vector2();
position->set_x(10.0f);
position->set_y(25.0f);
PlayerData *data = new PlayerData();
data->set_health((UINT32)50);
data->set_allocated_position(position);
auto inventory = data->add_items();
inventory->set_itemid((INT32)1);
inventory->set_count((INT32)5);
message->set_allocated_playerdata(data);
m_pNetworkObject->Send(message);
This is the code that actually sends the data over a TCP connection:
// Serialize to string.
std::string sendData;
data->SerializeToString(&sendData);
// Convert data to const char*
const char* dataToSend = sendData.c_str();
int iResult;
iResult = send( ConnectSocket, dataToSend, data->ByteSize(), 0 );
if (iResult == SOCKET_ERROR)
{
    printf("send failed with error: %d\n", WSAGetLastError());
    closesocket(ConnectSocket);
    WSACleanup();
    exit(1);
}
So, that is the code that runs on the client. Now, when I receive this on the server, I get the following errors from Protocol Buffers:
Can't parse message of type "MessageID" because it is missing required fields: type
Can't parse message of type "PlayerData" because it is missing required fields: Position, Health
The code on the server that parses the Winsock data looks like this:
// decompile to a proto file and check ID
MessageID* receivedData = new MessageID();
receivedData->ParseFromArray(&recv, strlen(recv));
// check and redeserialize
switch (receivedData->type())
{
    case MessageID::Type::MessageID_Type_PLAYERDATA:
    {
        PlayerData* playerData = new PlayerData();
        playerData->ParseFromArray(&recv, strlen(recv));
        UsePlayerData(sock, playerData);
        break;
    }
    case MessageID::Type::MessageID_Type_WORLDDATA:
    {
        WorldData* worldData = new WorldData();
        worldData->ParseFromArray(&recv, strlen(recv));
        UseWorldData(sock, worldData);
        break;
    }
}
The weird thing is that, although the error says the data doesn't contain the Type value, it is set to 1 and the switch statement works. After that, when I take a look at the Position and Health data, everything is 0.
I have no idea what I'm doing wrong. I have read most of the Protocol Buffers tutorials and help, but to no avail.
EDIT:
I tried serializing and then deserializing the data in the same application (without the networking involved), and the following code did just that:
std::string sendData;
message->SerializeToString(&sendData);
MessageID* receivedData = new MessageID();
receivedData->ParseFromString(sendData);
If I check the data in receivedData, it is exactly the same as in the original "message" object (so, the right values).
Your problem is that you are using strlen() to find the length of the message. strlen() just looks for the first zero-valued byte and assumes that's the end. This is the convention for text strings, but does not apply to binary messages like Protocol Buffers. In this case, strlen() is returning a size that is too short and so some fields are missing from the message, but the opposite can happen too -- strlen() can run off the end of the buffer and return a size that is too long, or even crash your program.
So, you need to make sure the actual exact size of the message is communicated from the sender to the receiver.
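A common fix is to prefix every message with its exact size. Here is a minimal framing sketch (my illustration, not the poster's code; the generated header name is assumed):

#include <winsock2.h>
#include <cstdint>
#include <string>
#include "message.pb.h" // the poster's generated header (assumed name)

// Sender: write a 4-byte length header, then the serialized payload.
bool SendProtoMessage(SOCKET s, const MessageID &msg)
{
    std::string payload;
    if (!msg.SerializeToString(&payload))
        return false;
    uint32_t size = htonl(static_cast<uint32_t>(payload.size()));
    if (send(s, reinterpret_cast<const char *>(&size), sizeof(size), 0) == SOCKET_ERROR)
        return false;
    return send(s, payload.data(), static_cast<int>(payload.size()), 0) != SOCKET_ERROR;
}

// Receiver helper: loop until exactly len bytes have arrived,
// since TCP may deliver them in several chunks.
static bool RecvAll(SOCKET s, char *buf, int len)
{
    while (len > 0) {
        int n = recv(s, buf, len, 0);
        if (n <= 0)
            return false; // connection closed or error
        buf += n;
        len -= n;
    }
    return true;
}

// Receiver: read the length header, then exactly that many payload bytes.
bool RecvProtoMessage(SOCKET s, MessageID *msg)
{
    uint32_t size = 0;
    if (!RecvAll(s, reinterpret_cast<char *>(&size), sizeof(size)))
        return false;
    size = ntohl(size);
    std::string payload(size, '\0');
    if (size > 0 && !RecvAll(s, &payload[0], static_cast<int>(size)))
        return false;
    return msg->ParseFromString(payload);
}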

Boost asio tcp socket available reports incorrect number of bytes

In an SSL client/server model, I use the code below to read data from the socket on either the client or the server side.
I only read data when data is available. To know when data is available, I call the available() method on the lowest_layer() of the asio::ssl::stream.
After I send 380 bytes from the client to the server and enter the read method on the server, I see the following.
‘s’ is the buffer I supplied.
‘n’ is the size of the buffer I supplied.
‘a1’ is the result of available() before the read and will report 458 bytes.
‘r’ is the number of bytes actually read. It will report 380, which is correct.
‘a2’ is the result of available() after the read and will report 0 bytes. This is what I expect, since my client sent 380 bytes and I have read them all.
Why does the first call to available() report too many bytes?
Types:
/**
* Type used as SSL Socket. Handles SSL and socket functionality.
*/
typedef boost::asio::ssl::stream<boost::asio::ip::tcp::socket> SslSocket;
/**
* A shared pointer version of the SSL Socket type.
*/
typedef boost::shared_ptr<SslSocket> ShpSslSocket;
Members:
ShpSslSocket m_shpSecureSocket;
Part of the read method:
std::size_t a1 = 0;
if ((a1 = m_shpSecureSocket->lowest_layer().available()) > 0)
{
r += boost::asio::read(*m_shpSecureSocket,
boost::asio::buffer(s, n),
boost::asio::transfer_at_least(1));
}
std::size_t a2 = m_shpSecureSocket->lowest_layer().available();
Added info:
So I changed my read method to check more thoroughly whether there is still data available to be read from the boost::asio::ssl::stream. Not only do I need to check whether there is data available at the socket level; there may also be data sitting in an OpenSSL buffer somewhere. SSL_peek does the trick. Besides checking for available data, the method also checks the TCP port status, and does all this as long as there is no timeout.
Here is the complete read method of the boost::iostreams::device class that I created.
std::streamsize SslClientSocketDevice::read(char* s, std::streamsize n)
{
// Request from the stream/device to receive/read bytes.
std::streamsize r = 0;
LIB_PROCESS::TcpState eActualState = LIB_PROCESS::TCP_NOT_EXIST;
char chSslPeekBuf; // 1 byte peek buffer
// Check that there is data available. If not, wait for it.
// Check is on the lowest layer (tcp). In that layer the data is encrypted.
// The number of encrypted bytes is most often different than the number
// of unencrypted bytes that would be read from the secure socket.
// Also: Data may be read by OpenSSL from the socket and remain in an
// OpenSSL buffer somewhere. We also check that.
boost::posix_time::ptime start = BOOST_UTC_NOW;
int nSslPeek = 0;
std::size_t nAvailTcp = 0;
while ((*m_shpConnected) &&
(LIB_PROCESS::IpMonitor::CheckPortStatusEquals(GetLocalEndPoint(),
GetRemoteEndPoint(),
ms_ciAllowedStates,
eActualState)) &&
((nAvailTcp = m_shpSecureSocket->lowest_layer().available()) == 0) &&
((nSslPeek = SSL_peek(m_shpSecureSocket->native_handle(), &chSslPeekBuf, 1)) <= 0) && // May return error (<0) as well
((start + m_oReadTimeout) > BOOST_UTC_NOW))
{
boost::this_thread::sleep(boost::posix_time::millisec(10));
}
// Always read data when there is data available, even if the state is no longer valid.
// Data may be reported by the TCP socket (num encrypted bytes) or have already been read
// by SSL and not yet returned to us.
// Remote party can have sent data and have closed the socket immediately.
if ((nAvailTcp > 0) || (nSslPeek > 0))
{
r += boost::asio::read(*m_shpSecureSocket,
boost::asio::buffer(s, n),
boost::asio::transfer_at_least(1));
}
// Close socket when state is not valid.
if ((eActualState & ms_ciAllowedStates) == 0x00)
{
LOG4CXX_INFO(LOG4CXX_LOGGER, "TCP socket not/no longer connected. State is: " <<
LIB_PROCESS::IpMonitor::TcpStateToString(eActualState));
LOG4CXX_INFO(LOG4CXX_LOGGER, "Disconnecting socket.");
Disconnect();
}
if (! (*m_shpConnected))
{
if (r == 0)
{
r = -1; // Signal stream is closed if no data was retrieved.
ThrowExceptionStreamFFL("TCP socket not/no longer connected.");
}
}
return r;
}
So maybe I know why this is. It is an SSL connection, and therefore the transferred bytes are encrypted. Encrypted data may well be of a different size because of padding, block sizes, and record overhead. I guess that answers why the number of bytes available at the TCP level differs from the number of bytes that comes out of a read.

Why do I not see MSG_EOR for SOCK_SEQPACKET on linux?

I have two processes which are communicating over a pair of sockets created with socketpair() and SOCK_SEQPACKET. Like this:
int ipc_sockets[2];
socketpair(PF_LOCAL, SOCK_SEQPACKET, 0, ipc_sockets);
As I understand it, I should see MSG_EOR in the msg_flags member of "struct msghdr" when receiving a SOCK_SEQPACKET record. I am setting MSG_EOR in sendmsg() to be certain that the record is marked MSG_EOR, but I do not see it when receiving in recvmsg(). I've even tried to set MSG_EOR in the msg_flags field before sending the record, but that made no difference at all.
I think I should see MSG_EOR unless the record was cut short by, e.g. a signal, but I do not. Why is that?
I've pasted my sending and receiving code in below.
Thanks,
jules
int
send_fd(int fd,
void *data,
const uint32_t len,
int fd_to_send,
uint32_t * const bytes_sent)
{
ssize_t n;
struct msghdr msg;
struct iovec iov;
memset(&msg, 0, sizeof(struct msghdr));
memset(&iov, 0, sizeof(struct iovec));
#ifdef HAVE_MSGHDR_MSG_CONTROL
union {
struct cmsghdr cm;
char control[CMSG_SPACE_SIZEOF_INT];
} control_un;
struct cmsghdr *cmptr;
msg.msg_control = control_un.control;
msg.msg_controllen = sizeof(control_un.control);
memset(msg.msg_control, 0, sizeof(control_un.control));
cmptr = CMSG_FIRSTHDR(&msg);
cmptr->cmsg_len = CMSG_LEN(sizeof(int));
cmptr->cmsg_level = SOL_SOCKET;
cmptr->cmsg_type = SCM_RIGHTS;
*((int *) CMSG_DATA(cmptr)) = fd_to_send;
#else
msg.msg_accrights = (caddr_t) &fd_to_send;
msg.msg_accrightslen = sizeof(int);
#endif
msg.msg_name = NULL;
msg.msg_namelen = 0;
iov.iov_base = data;
iov.iov_len = len;
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
#ifdef __linux__
msg.msg_flags = MSG_EOR;
n = sendmsg(fd, &msg, MSG_EOR);
#elif defined __APPLE__
n = sendmsg(fd, &msg, 0); /* MSG_EOR is not supported on Mac
* OS X due to lack of
* SOCK_SEQPACKET support on
* socketpair() */
#endif
switch (n) {
case EMSGSIZE:
return EMSGSIZE;
case -1:
return 1;
default:
*bytes_sent = n;
}
return 0;
}
int
recv_fd(int fd,
void *buf,
const uint32_t len,
int *recvfd,
uint32_t * const bytes_recv)
{
struct msghdr msg;
struct iovec iov;
ssize_t n = 0;
#ifndef HAVE_MSGHDR_MSG_CONTROL
int newfd;
#endif
memset(&msg, 0, sizeof(struct msghdr));
memset(&iov, 0, sizeof(struct iovec));
#ifdef HAVE_MSGHDR_MSG_CONTROL
union {
struct cmsghdr cm;
char control[CMSG_SPACE_SIZEOF_INT];
} control_un;
struct cmsghdr *cmptr;
msg.msg_control = control_un.control;
msg.msg_controllen = sizeof(control_un.control);
memset(msg.msg_control, 0, sizeof(control_un.control));
#else
msg.msg_accrights = (caddr_t) &newfd;
msg.msg_accrightslen = sizeof(int);
#endif
msg.msg_name = NULL;
msg.msg_namelen = 0;
iov.iov_base = buf;
iov.iov_len = len;
msg.msg_iov = &iov;
msg.msg_iovlen = 1;
if (recvfd)
*recvfd = -1;
n = recvmsg(fd, &msg, 0);
if (msg.msg_flags) { // <== I should see MSG_EOR here if the entire record was received
return 1;
}
if (bytes_recv)
*bytes_recv = n;
switch (n) {
case 0:
*bytes_recv = 0;
return 0;
case -1:
return 1;
default:
break;
}
#ifdef HAVE_MSGHDR_MSG_CONTROL
if ((NULL != (cmptr = CMSG_FIRSTHDR(&msg)))
&& cmptr->cmsg_len == CMSG_LEN(sizeof(int))) {
if (SOL_SOCKET != cmptr->cmsg_level) {
return 0;
}
if (SCM_RIGHTS != cmptr->cmsg_type) {
return 0;
}
if (recvfd)
*recvfd = *((int *) CMSG_DATA(cmptr));
}
#else
if (recvfd && (sizeof(int) == msg.msg_accrightslen))
*recvfd = newfd;
#endif
return 0;
}
With SOCK_SEQPACKET unix domain sockets the only way for the message to be cut short is if the buffer you give to recvmsg() isn't big enough (and in that case you'll get MSG_TRUNC).
POSIX says that SOCK_SEQPACKET sockets must set MSG_EOR at the end of a record, but Linux unix domain sockets don't.
(Refs: POSIX 2008 2.10.10 says SOCK_SEQPACKET must support records, and 2.10.6 says record boundaries are visible to the receiver via the MSG_EOR flag.)
What a 'record' means for a given protocol is up to the implementation to define.
If Linux did implement MSG_EOR for unix domain sockets, I think the only sensible way would be to say that each packet was a record in itself, and so always set MSG_EOR (or maybe always set it when not setting MSG_TRUNC), so it wouldn't be informative anyway.
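As an aside, the check if (msg.msg_flags) in the receiving code above treats any flag, including MSG_EOR itself, as a failure. A minimal sketch of testing the specific flags instead (my illustration, not the poster's code):

#include <cstdio>
#include <cstring>
#include <sys/socket.h>
#include <sys/uio.h>

/* Read one SOCK_SEQPACKET record; returns bytes read, or -1 on error/truncation. */
ssize_t read_record(int fd, void *buf, size_t len)
{
    struct iovec iov;
    struct msghdr msg;
    memset(&msg, 0, sizeof(msg));
    iov.iov_base = buf;
    iov.iov_len = len;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;

    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return -1;
    if (msg.msg_flags & MSG_TRUNC) {
        /* Buffer too small: the rest of this record has been discarded. */
        fprintf(stderr, "record truncated\n");
        return -1;
    }
    if (msg.msg_flags & MSG_EOR) {
        /* End of record. On Linux AF_UNIX SOCK_SEQPACKET this is never set,
           even though each recvmsg() returns exactly one complete record. */
    }
    return n;
}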
That's not what MSG_EOR is for.
Remember that the sockets API is an abstraction over a number of different protocols, including UNIX filesystem sockets, socketpairs, TCP, UDP, and many many different network protocols, including X.25 and some entirely forgotten ones.
MSG_EOR is to signal end of record where that makes sense for the underlying protocol. I.e. it is to pass a message to the next layer down that "this completes a record". This may affect for example, buffering, causing the flushing of a buffer. But if the protocol itself doesn't have a concept of a "record" there is no reason to expect the flag to be propagated.
Secondly, if using SEQPACKET you must read the entire message at once. If you do not the remainder will be discarded. That's documented. In particular, MSG_EOR is not a flag to tell you that this is the last part of the packet.
Advice: You are obviously writing a non-SEQPACKET version for use on MacOS. I suggest you dump the SEQPACKET version as it is only going to double the maintenance and coding burden. SOCK_STREAM is fine for all platforms.
When you read the docs, SOCK_SEQPACKET differs from SOCK_STREAM in two distinct ways. Firstly -
Sequenced, reliable, two-way connection-based data transmission path for datagrams of fixed maximum length; a consumer is required to read an entire packet with each input system call.
-- socket(2) from Linux manpages project
aka
For message-based sockets, such as SOCK_DGRAM and SOCK_SEQPACKET, the entire message shall be read in a single operation. If a message is too long to fit in the supplied buffers, and MSG_PEEK is not set in the flags argument, the excess bytes shall be discarded, and MSG_TRUNC shall be set in the msg_flags member of the msghdr structure.
-- recvmsg() in POSIX standard.
In this sense it is similar to SOCK_DGRAM.
Secondly each "datagram" (Linux) / "message" (POSIX) carries a flag called MSG_EOR.
However Linux SOCK_SEQPACKET for AF_UNIX does not implement MSG_EOR. The current docs do not match reality :-)
Allegedly some SOCK_SEQPACKET implementations do the other one. And some implement both. So that covers all the possible different combinations :-)
[1] Packet oriented protocols generally use packet level reads with truncation / discard semantics and no MSG_EOR. X.25, Bluetooth, IRDA, and Unix domain sockets use SOCK_SEQPACKET this way.
[2] Record oriented protocols generally use byte stream reads and MSG_EOR - no packet level visibility, no truncation / discard. DECNet and ISO TP use SOCK_SEQPACKET that way.
[3] Packet / record hybrids generally use SOCK_SEQPACKET with truncation / discard semantics on the packet level, and record terminating packets marked with MSG_EOR. SPX and XNS SPP use SOCK_SEQPACKET this way.
https://mailarchive.ietf.org/arch/msg/tsvwg/9pDzBOG1KQDzQ2wAul5vnAjrRkA
You've shown an example of paragraph 1.
Paragraph 2 also applies to SOCK_SEQPACKET as defined for SCTP. Although by default it sets MSG_EOR on every sendmsg(). The option to disable this is called SCTP_EXPLICIT_EOR.
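For illustration, enabling that option looks something like this (a sketch assuming Linux with the lksctp-tools headers; check availability on your platform):

#include <netinet/sctp.h>
#include <sys/socket.h>

/* After this, SCTP no longer marks every sendmsg() as a complete record;
   a record ends only when the sender passes MSG_EOR explicitly. */
int enable_explicit_eor(int sock)
{
    int on = 1;
    return setsockopt(sock, IPPROTO_SCTP, SCTP_EXPLICIT_EOR, &on, sizeof(on));
}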
Paragraph 3, the one most consistent with the docs, seems to be the most obscure case.
And even the docs are not properly consistent with themselves.
The SOCK_SEQPACKET socket type is similar to the SOCK_STREAM type, and is also connection-oriented. The only difference between these types is that record boundaries are maintained using the SOCK_SEQPACKET type. A record can be sent using one or more output operations and received using one or more input operations, but a single operation never transfers parts of more than one record. Record boundaries are visible to the receiver via the MSG_EOR flag in the received message flags returned by the recvmsg() function. -- POSIX standard