The problem is that my server only connects to one client instead of two. Can anyone help me figure out why?
Server:
#include <SFML/System.hpp>
#include <SFML/Network.hpp>
#include <iostream>
void sendInfo(void *UserData)
{
sf::IPAddress* ip = static_cast<sf::IPAddress*>(UserData);
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "sending info.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), *ip, 4444) != sf::Socket::Done)
{
// Error...
}
}
}
void receiveInfo(void *userData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
std::cout << Buffer << std::endl;
Socket.Close();
}
}
int main()
{
sf::IPAddress client[2];
int connected = 0;
while(connected < 2){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
client[connected] = Sender;
Socket.Close();
sf::Thread* send = new sf::Thread(&sendInfo, &client[connected]);
sf::Thread* receive = new sf::Thread(&receiveInfo, &client[connected]);
// Start it !
send->Launch();
receive->Launch();
connected++;
}
while(true){
}
return EXIT_SUCCESS;
}
Client:
#include <SFML/System.hpp>
#include <SFML/Network.hpp>
#include <iostream>
void sendInfo(void *UserData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "client sending info.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), "127.0.0.1", 4444) != sf::Socket::Done)
{
// Error...
}
}
}
void receiveInfo(void *userData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
std::cout << Buffer << std::endl;
Socket.Close();
}
}
int main()
{
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "Client Joined.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), "127.0.0.1", 4444) != sf::Socket::Done)
{
// Error...
}
sf::Thread* send = new sf::Thread(&sendInfo);
sf::Thread* receive = new sf::Thread(&receiveInfo);
// Start it !
send->Launch();
receive->Launch();
while(true){
}
return EXIT_SUCCESS;
}
First things first: is this a chat server or a 'more typical' server?
If this is a chat server, then you either need to have a list of sockets that are connected to the clients (you can connect UDP sockets using the connect() call, very convenient, and it also helps reduce the chances of spoofed peers) or a list of all the client addresses that you can supply to sendto() or sendmsg().
More 'typical' servers won't try to send messages to any client except the one that most recently made a request: those servers typically don't save anything from the clients, and instead will use recvfrom() or recvmsg() to get the peer's address for use in later sendto() or sendmsg() calls.
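For the chat-server case, here is a rough sketch in plain BSD-socket terms (not SFML, error handling omitted) of remembering every peer seen by recvfrom() and fanning each datagram back out with sendto(); chat_loop() is a made-up name, and the socket is assumed to already be bound to the server's port:
#include <sys/socket.h>
#include <netinet/in.h>
#include <vector>

void chat_loop(int udp_fd)   // udp_fd is already bound to the server port
{
    std::vector<sockaddr_in> peers;
    char buf[512];

    while (true) {
        sockaddr_in from{};
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(udp_fd, buf, sizeof(buf), 0,
                             reinterpret_cast<sockaddr*>(&from), &fromlen);
        if (n <= 0)
            continue;

        // Remember the sender if we haven't seen it before.
        bool known = false;
        for (const auto& p : peers)
            if (p.sin_addr.s_addr == from.sin_addr.s_addr &&
                p.sin_port == from.sin_port)
                known = true;
        if (!known)
            peers.push_back(from);

        // Relay the datagram to every known peer.
        for (const auto& p : peers)
            sendto(udp_fd, buf, static_cast<size_t>(n), 0,
                   reinterpret_cast<const sockaddr*>(&p), sizeof(p));
    }
}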
Also, most protocols only rely on one well known port; the server uses one specific port by convention, but clients select whatever port is open and free. FTP relies heavily on well-known ports on the client-side as well, and as a result is a gigantic pain to tunnel through Network Address Translation firewalls.
It isn't just academic: both your client and your server are attempting to bind() to port 4444. This means you need at least two IP addresses on a single machine to test, or use virtualization software to run an entirely separate machine on the same hardware, or just have two machines available. It's more work than it needs to be, and there's no reason for the clients to care about their local port numbers:
Server:
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
Client:
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
Poof! These two will never run on the same host without significant tricks. I expect your "it connects to one" is probably just the server or the client connecting to itself, but without some code to fill in those // Error blocks, it'd be tough to tell for sure.
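In plain-socket terms, the client-side fix is simply to skip bind() entirely and let the OS pick an ephemeral local port on the first send; replies from the server then come back to that port. A sketch (make_client_socket() is a made-up helper):
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <cstring>

int make_client_socket(const char* server_ip, unsigned short server_port,
                       sockaddr_in& server /* filled in for later sendto() */)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    std::memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port   = htons(server_port);
    inet_pton(AF_INET, server_ip, &server.sin_addr);

    // No bind() here: only the server needs the well-known port.
    return fd;
}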
(And while we're here, I'd like to take an aside to talk about comments; comments that simply re-state what the code does aren't very useful. You'll note that most of your comments are in fact wrong, referring to wrong IPs or ports. Some just don't add any information:
// Create the UDP socket
sf::SocketUDP Socket;
I know we're taught to add comments, but sadly we're not always taught what kind of comments to add. The only comment in both programs that I'd recommend even keeping would be this one, slightly amended:
// udp doesn't require listen or accept
if (!Socket.Bind(4444))
It isn't obvious from reading the code, and won't be wrong when the port number is read out of an environment variable, command line parameter, configuration file, or registry. (It might be too redundant in a team of people familiar with the sockets API, but might be gold for a programmer not that familiar with the differences between UDP and TCP.)
Good function names, variable names, etc., will win over comments almost every time. End of aside. :)
And now, some more minor nit-picking: your thread handlers are doing their work like this:
while (1) {
    socket s;
    bind s;
    r = recv s;
    print r;
    close s;
}
This needless creation, binding, and closing is all wasted energy, both the computer's energy and (much more importantly) your energy. Consider the following two re-writings:
recv_thread() {
    socket s;
    bind s;
    while (1) {
        r = recv s;
        print r;
    }
    close s;
}
or
recv_thread(s) {
    while (1) {
        r = recv s;
        print r;
    }
}

/* ... */

socket s;
bind s;
sf::Thread* rt = new sf::Thread(&recv_thread);
rt->Launch(s);
The first option is a simple refactoring of your existing code; it keeps the socket creation and destruction in the thread function, but moves the loop invariants out of the loop. The code inside the loop now does only what is necessary.
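Applied to the receiveInfo() thread from the question, that first option might look like this (a sketch only; the // Error placeholders are left as in the original):
void receiveInfo(void* /*userData*/)
{
    // Create and bind the socket once, outside the loop.
    sf::SocketUDP Socket;
    if (!Socket.Bind(4444))
    {
        // Error...
    }

    while (true)
    {
        char Buffer[128];
        std::size_t Received;
        sf::IPAddress Sender;
        unsigned short Port;

        if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
        {
            // Error...
        }
        std::cout << Buffer << std::endl;
    }

    Socket.Close();  // unreachable as written, kept only to mirror the original
}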
The second option is a more drastic reworking: it moves the socket creation to the main thread, where error-handling is probably much easier, and the thread function does only exactly what the remote peer needs that thread to do. (If you wanted to change from UDP to TCP, the second option would be by far the easier one to change -- your threading code might not need any modifications at all.)
I hope this helps. :)
I need some help with a socket program with multiple clients and one server. To simplify, I create
3 socket clients
1 socket server
For each client, it opens a new connection for sending a new message and closes the connection after a response is received.
For the server, it does not need to deal with connections concurrently, it can deal with the message one by one
Here is my code (runnable); compile it with /usr/bin/g++ mycode.cpp -g -lpthread -lrt -Wall -o mycode
#include <iostream>
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>
#include <unordered_map>
#include <thread>
using namespace std;
void Warning(string msg) { std::cout<< msg << std::endl; }
namespace mySocket {
class Memcached {
public:
// start a server
static void controller(int port=7111) { std::thread (server, port).detach(); }
// open a new connection to send a message:
// 1. open a connection
// 2. send the message
// 3. read the message
// 4. close the connection
std::string sendMessage(string msg, string host, int port=7111) {
int sock = 0, client_fd;
struct sockaddr_in serv_addr;
char buffer[1024] = { 0 };
if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
std::cout << "Socket creation error, msg: " << msg << ", host: " << host << ", port: " << port << std::endl;
exit(1);
}
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(port);
if (inet_pton(AF_INET, host.c_str(), &serv_addr.sin_addr) <= 0) {
std::cout << "\nInvalid address/ Address not supported, kmsgey: " << msg << ", host: " << host << ", port: " << port << std::endl;
exit(1);
}
while ((client_fd = connect(sock, (struct sockaddr*)&serv_addr, sizeof(serv_addr))) < 0) { sleep(10*1000); }
std::cout << "client sends a message:"<<msg<<", msg size:"<<msg.size()<<std::endl;
send(sock, msg.c_str(), msg.size(), 0);
read(sock, buffer, 1024);
close(client_fd);
return std::string(buffer, strlen(buffer));
}
private:
// start a server
// 1. open a file descriptor
// 2. listen the fd with queue size 10
// 3. accept one connection at a time
// 4. deal with message in the connection
// 5. accept the next connection
// 6. repeat step 3
static void server(int port) {
int server_fd, new_socket;
struct sockaddr_in address;
int opt = 1;
int addrlen = sizeof(address);
char buffer[1024] = { 0 };
unordered_map<string,string> data;
if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
Warning("socket failed"); exit(1);
}
if (setsockopt(server_fd, SOL_SOCKET, SO_REUSEADDR | SO_REUSEPORT, &opt, sizeof(opt))) {
Warning("setsockopt failed"); exit(1);
}
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(port);
if (bind(server_fd, (struct sockaddr*)&address, sizeof(address)) < 0) {
Warning("bind failed"); exit(1);
}
// the queue size is 10 > 3
if (listen(server_fd, 10) < 0) {
Warning("listen failed"); exit(1);
}
while(1)
{
if ((new_socket = accept(server_fd, (struct sockaddr*)&address, (socklen_t*)&addrlen)) < 0) {
std::cout << "accept failed"; exit(1);
}
memset(&buffer, 0, sizeof(buffer)); //clear the buffer
read(new_socket, buffer, 1024);
std::string msg = std::string(buffer, strlen(buffer));
if (msg.size()==0) {
std::cout<<"I can't believe it"<<std::endl;
}
std::cout<<"received msg from the client:"<<msg<<",msg size:"<<msg.size()<<std::endl;
std::string results="response from the server:["+msg+"]";
send(new_socket, results.c_str(), results.length(), 0);
//usleep(10*1000);
}
if (close(new_socket)<0){
std::cout <<"close error"<<std::endl;
}
shutdown(server_fd, SHUT_RDWR);
}
} ;
}
void operation(int client_id) {
auto obj = new mySocket::Memcached();
for (int i=0; i<10;i++){
int id=client_id*100+i;
std::cout<<obj->sendMessage(std::to_string(id), "127.0.0.1", 7111)<<std::endl<<std::endl;
}
}
int main(int argc, char const* argv[]) {
// start a socket server
mySocket::Memcached::controller();
// start 3 socket clients
std::thread t1(operation, 1);
std::thread t2(operation, 2);
std::thread t3(operation, 3);
t1.join();
t2.join();
t3.join();
}
In the code above, the client always sends a message of length 3. However, the server can receive messages of length 0, which causes further errors.
I've been struggling with this for several days and can't figure out why it happens. I noticed:
if I add a short sleep inside the server while loop, the problem goes away (uncomment usleep(10*1000); in the code);
or if I only use one client, the problem also goes away.
Any thoughts would help.
You are using TCP sockets. You may want to use an application-level protocol like HTTP or WebSockets instead; that will be much easier, because you won't need to worry about how a message is sent and received, or in what sequence. If you have to stick with raw TCP sockets, you first have to understand a few things:
There are two kinds of TCP socket I/O you can use: blocking and non-blocking. You are currently using blocking I/O: calls will sometimes block, and while they do you can't do anything else with that socket. With blocking I/O this is usually worked around by using one socket per thread on the server side. It's not efficient, but it's relatively easy compared to non-blocking I/O. Non-blocking I/O doesn't wait for anything: instead of waiting for data, you set up something like events or callbacks that run when data is available. You should probably read up on both types of I/O.
In your server function, it would be better to listen for incoming connections in one thread and, when a connection arrives, hand it off to another thread and function that handles everything else. That may solve your problem with multiple clients connecting at the same time; a sketch follows below.
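A possible shape for that (one std::thread per accepted connection; handle_client() here is just a stand-in echo handler, not code from your program):
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

// Hypothetical per-connection handler: echo one request, then close.
static void handle_client(int client_fd)
{
    char buffer[1024];
    ssize_t n = read(client_fd, buffer, sizeof(buffer));
    if (n > 0)
        send(client_fd, buffer, static_cast<size_t>(n), 0);
    close(client_fd);
}

// Accept loop: server_fd is assumed to already be listen()ing.
void accept_loop(int server_fd)
{
    while (true) {
        int client_fd = accept(server_fd, nullptr, nullptr);
        if (client_fd < 0)
            continue;
        // Hand the connection to its own thread and keep accepting.
        std::thread(handle_client, client_fd).detach();
    }
}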
In the operation function, instead of allocating the object with a raw new, use stack allocation or a smart pointer to avoid the memory leak. If you don't want to, then at least call delete obj; at the end of the function.
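For instance, operation() could simply allocate the object on the stack (a drop-in replacement for the function in your code):
void operation(int client_id) {
    // Stack allocation (or std::make_unique) avoids the leak from `new`.
    mySocket::Memcached obj;
    for (int i = 0; i < 10; i++) {
        int id = client_id * 100 + i;
        std::cout << obj.sendMessage(std::to_string(id), "127.0.0.1", 7111)
                  << std::endl << std::endl;
    }
}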
And one last thing: you can use a TCP socket wrapper like sockpp to make things a lot easier. You will still have everything TCP sockets offer, but in a C++ style that is a bit easier to understand and maintain. If you can't use an application-level protocol, I strongly suggest at least using some wrapper.
Update
As was stated by commenters, there are more things you need to know:
TCP sockets are streams. This means that if you send a message of 1024 bytes, it may be split into several TCP segments, and you can't know whether it will be split or how many reads the other side will need to receive it all. You have to read in a loop using recv() and wait for the rest of the data. There are some tricks that can help you receive data properly (a sketch follows after this list):
You can send the length of your message first, so the other side knows how many bytes it needs to receive.
You can place a terminating symbol (or sequence of symbols) at the end of your message and read until it is received. This is a little risky, because if those symbols never arrive you will keep reading into the next message.
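Here is a minimal sketch of the length-prefix approach over plain BSD sockets (a 4-byte network-order length followed by the payload); recv_all(), send_message(), and recv_message() are made-up helper names:
#include <arpa/inet.h>   // htonl / ntohl
#include <sys/socket.h>
#include <cstdint>
#include <string>

// Keep calling recv() until exactly `len` bytes have arrived (or the
// connection drops); TCP may deliver the bytes in arbitrary chunks.
static bool recv_all(int fd, char* buf, size_t len)
{
    size_t got = 0;
    while (got < len) {
        ssize_t n = recv(fd, buf + got, len - got, 0);
        if (n <= 0)
            return false;                 // peer closed, or error
        got += static_cast<size_t>(n);
    }
    return true;
}

// Prefix every message with its length as a 4-byte network-order integer.
static bool send_message(int fd, const std::string& msg)
{
    uint32_t len = htonl(static_cast<uint32_t>(msg.size()));
    if (send(fd, &len, sizeof(len), 0) != static_cast<ssize_t>(sizeof(len)))
        return false;
    return send(fd, msg.data(), msg.size(), 0) ==
           static_cast<ssize_t>(msg.size());
}

// Read the 4-byte length, then read exactly that many payload bytes.
static bool recv_message(int fd, std::string& msg)
{
    uint32_t len = 0;
    if (!recv_all(fd, reinterpret_cast<char*>(&len), sizeof(len)))
        return false;
    msg.resize(ntohl(len));
    return msg.empty() || recv_all(fd, &msg[0], msg.size());
}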
You should only let the client threads start connecting once you know the server is already up and listening for incoming connections. You can use a variable as a flag for this, but note that you have to be very careful when reading and writing a variable from two or more threads; use a mutex (or a condition variable), which lets you safely access one variable from several threads.
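A sketch of that "server is ready" handshake with std::mutex and std::condition_variable (the names are illustrative, not from your code):
#include <condition_variable>
#include <mutex>

std::mutex              ready_mutex;
std::condition_variable ready_cv;
bool                    server_ready = false;

// Called by the server thread right after listen() succeeds.
void notify_server_ready()
{
    {
        std::lock_guard<std::mutex> lock(ready_mutex);
        server_ready = true;
    }
    ready_cv.notify_all();
}

// Called by each client thread before its first connect().
void wait_for_server()
{
    std::unique_lock<std::mutex> lock(ready_mutex);
    ready_cv.wait(lock, [] { return server_ready; });
}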
I've been struggling with an issue for hours:
I want to connect a boost::asio UDP socket to an endpoint. There is no problem doing this with IPv4, but when I try the same with IPv6, I get the error code "invalid argument".
using boost::asio::ip::udp;
struct UdpConnectionParams
{
udp::endpoint m_localEndpoint;
udp::endpoint m_remoteEndpoint;
};
boost::system::error_code setupUdpConnection(udp::socket& p_socket, const UdpConnectionParams& p_params)
{
// close socket
boost::system::error_code h_ignoreError;
p_socket.close(h_ignoreError);
// variables for kind of UDP connection
udp h_protocol(udp::v4());
bool h_shallBind{false};
bool h_shallConnect{false};
// determine kind of connection
if(p_params.m_localEndpoint != udp::endpoint())
{
h_protocol = p_params.m_localEndpoint.protocol();
h_shallBind = true;
}
if(p_params.m_remoteEndpoint != udp::endpoint())
{
h_protocol = p_params.m_remoteEndpoint.protocol();
h_shallConnect = true;
}
if(!h_shallBind && !h_shallConnect)
{
// no endpoint specified, return error
return boost::system::error_code(ENetworkErrorCode::NO_ENDPOINT_SPECIFIED, NetworkErrorCategory::getCategory());
}
try
{
p_socket.open(h_protocol);
//bind socket to certain endpoint
if(h_shallBind)
{
p_socket.bind(p_params.m_localEndpoint);
}
//connect socket to client. Thus it is possible to use p_socket.send()
if(h_shallConnect)
{
p_socket.connect(p_params.m_remoteEndpoint);
}
}
catch (boost::system::system_error& h_error)
{
p_socket.close(h_ignoreError);
return h_error.code();
}
// no error
return boost::system::error_code();
}
int main()
{
boost::asio::io_service service;
udp::socket socket(service);
boost::system::error_code error;
UdpConnectionParams params;
params.m_localEndpoint = udp::endpoint(udp::v6(), 55555);
params.m_remoteEndpoint = udp::endpoint(boost::asio::ip::address_v6::from_string("ff01::101"), 55555);
error = setupUdpConnection(socket, params);
cout << error << error.message() << endl; // "invalid argument"
return 0;
}
The only way I get no error is with the localhost IP address (::1). It makes no difference whether I bind the socket to an endpoint.
What am I doing wrong?
What am I doing wrong?
The problem is that you don't specify an interface index/scope in the IPv6 address you are using. IPv6 multicast addresses require a scope to be specified so that the network stack knows which of your computer's local network interfaces to associate the address with.
i.e. instead of:
boost::asio::ip::address_v6::from_string("ff01::101"), 55555);
you need something like:
boost::asio::ip::address_v6::from_string("ff01::101%eth0"), 55555);
(The suffix after the % symbol will depend on the name of the network interface you want to use, of course)
(As a side note, the "ff01::" prefix is for node-local IPv6 multicast groups, which means that your UDP packets will only go to other programs running on the same computer. If that's what you intended, then great; on the other hand, if you wanted your UDP packets to reach other computers on the same LAN, you'll want to use a "ff02::" or "ff12::" prefix instead (ff02:: would be for a well-known multicast address, ff12:: would be for a transient multicast address). See the "Multicast address scope" table on the Wikipedia page for details)
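If you prefer not to embed the interface name in the address string, you can also set the scope id on the address object directly; here is a sketch (if_nametoindex() is the POSIX call that maps an interface name such as "eth0" to its index):
#include <boost/asio.hpp>
#include <net/if.h>   // if_nametoindex

boost::asio::ip::udp::endpoint make_scoped_endpoint(const char* iface,
                                                    unsigned short port)
{
    boost::asio::ip::address_v6 addr =
        boost::asio::ip::address_v6::from_string("ff01::101");
    // Attach the interface index as the scope id, e.g. iface = "eth0".
    addr.scope_id(if_nametoindex(iface));
    return boost::asio::ip::udp::endpoint(addr, port);
}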
I have been trying to figure out this problem for over a month now, and I have nowhere else to turn.
I have a server that listens to many multicast channels (100ish); each socket runs in its own thread. Then I have a client listener (single-threaded) that handles all incoming connections, disconnects, and client messaging within the same server. The idea is that a client connects, requests data from one of the multicast channels, and I relay that data back to the client. The client stays connected and I keep relaying the UDP data to it. The client can request either UDP or TCP as the protocol for the data relay. At one point this was working beautifully for a couple of weeks. We made some code and kernel changes, and now we can't figure out what's gone wrong.
The server will run for hours and have hundreds of clients connected throughout the day. But at some point, randomly, the server just stops. And by stop, I mean: all UDP sockets stop receiving/handling data (tcpdump shows data still arriving at the box), and the client_listener thread stops receiving client packets. BUT!!! the main client_listener socket can still accept new connections and see disconnects. On a new connection, the main socket is able to send a "Connection Established" packet back to the client, but when the client responds, the select never returns.
I can post code if someone would like. If anyone has any suggestions on where to look, or if this sounds like something you've seen before, please let me know.
If you have any questions, please ask.
Thank you.
I would like to share my TCP server code:
This is a single thread. It works fine for hours, and then I will only receive "New Connections" and "Disconnects". NO CLIENT PACKETS WILL COME IN.
int opt = 1;
int addrlen;
int sd;
int max_sd;
int valread;
int activity;
int new_socket;
char buffer[MAX_BUFFER_SIZE];
int client_socket[m_max_clients];
struct sockaddr_in address;
fd_set readfds;
for(int i = 0; i<m_max_clients; i++)
{
client_socket[i]=0;
}
if((m_master_socket = socket(AF_INET,SOCK_STREAM,0))==0)
LOG(FATAL)<<"Unable to create master socket";
if(setsockopt(m_master_socket,SOL_SOCKET,SO_REUSEADDR,(char*)&opt,sizeof(opt))<0)
LOG(FATAL)<<"Unable to set master socket";
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(m_listenPort);
if(bind(m_master_socket,(struct sockaddr*)& address, sizeof(address))!=0)
LOG(FATAL)<<"Unable to bind master socket";
if(listen(m_master_socket,SOMAXCONN)!=0)
LOG(FATAL)<<"listen() failed with err";
addrlen = sizeof(address);
LOG(INFO)<<"Waiting for connections......";
while(true)
{
FD_ZERO(&readfds);
FD_SET(m_master_socket, &readfds);
max_sd = m_master_socket;
for(int i = 0; i<m_max_clients; i++)
{
sd = client_socket[i];
if(sd > 0)
FD_SET(sd, &readfds);
if(sd>max_sd)
max_sd = sd;
}
activity = select(max_sd+1,&readfds,NULL,NULL,NULL);
if((activity<0)&&(errno!=EINTR))
{
// int err = errno;
// LOG(ERROR)<<"SELECT ERROR:"<<activity<<" "<<err;
continue;
}
if(FD_ISSET(m_master_socket, &readfds))
{
if((new_socket = accept(m_master_socket,(struct sockaddr*)&address, (socklen_t*)&addrlen))<0)
LOG(FATAL)<<"ERROR:ACCEPT FAILED!";
LOG(INFO)<<"New Connection, socket fd is (" << new_socket << ") client_addr:" << inet_ntoa(address.sin_addr) << " Port:" << ntohs(address.sin_port);
for(int i =0;i<m_max_clients;i++)
{
if(client_socket[i]==0)
{
//try to set the socket to non blocking, tcp nagle and keep alive
if ( !SetSocketBlockingEnabled(new_socket, false) )
LOG(INFO)<<"UNABLE TO SET NON-BLOCK: ("<<new_socket<<")" ;
if ( !SetSocketNoDelay(new_socket,false) )
LOG(INFO)<<"UNABLE TO SET DELAY: ("<<new_socket<<")" ;
// if ( !SetSocketKeepAlive(new_socket,true) )
// LOG(INFO)<<"UNABLE TO SET KeepAlive: ("<<new_socket<<")" ;
ClientConnection* con = new ClientConnection(m_mocSrv, m_udpPortGenerator, inet_ntoa(address.sin_addr), ntohs(address.sin_port), new_socket);
if(con->login())
{
client_socket[i] = new_socket;
m_clientConnectionSocketMap[new_socket] = con;
LOG(INFO)<<"Client Connection Logon Complete";
}
else
delete con;
break;
}
}//for
}
else
{
try{
for(int i = 0; i<m_max_clients; i++)
{
sd = client_socket[i];
if(FD_ISSET(sd,&readfds))
{
if ( (valread = recv(sd, buffer, sizeof(buffer),MSG_DONTWAIT|MSG_NOSIGNAL)) <= 0 )
{
//remove from the fd listening set
LOG(INFO)<<"RESET CLIENT_SOCKET:("<<sd<<")";
client_socket[i]=0;
handleDisconnect(sd,true);
}
else
{
std::map<int, ClientConnection*>::iterator client_connection_socket_iter = m_clientConnectionSocketMap.find(sd);
if(client_connection_socket_iter != m_clientConnectionSocketMap.end())
{
client_connection_socket_iter->second->handle_message(buffer, valread);
if(client_connection_socket_iter->second->m_logoff)
{
LOG(INFO)<<"SOCKET LOGGED OFF:"<<sd;
client_socket[i]=0;
handleDisconnect(sd,true);
}
}
else
{
LOG(ERROR)<<"UNABLE TO FIND SOCKET DESCRIPTOR:"<<sd;
}
}
}
}
}catch(...)
{
LOG(ERROR)<<"EXCEPTION CATCH!!!";
}
}
}
From the information given I would state the following:
Do not use a thread for each connection. Since you're on Linux, use epoll edge-triggered multiplexing. Most newer web frameworks use this technology; for more info, look up the C10K problem. (See the sketch at the end of this answer.)
By eliminating threads from the equation you eliminate the possibility of deadlock and reduce the complexity of debugging and of worrying about thread-safe variables.
Ensure each connection when finished is completely closed.
Ensure that you do not have some new firewall rules that popped up in iptables since the upgrade.
Check any firewalls on the network to see if they are restricting certain types of activity (is your server on a new IP since the upgrade?)
In short, I would put my money on a thread deadlock and/or starvation. I've personally run experiments comparing a multithreaded server against a single-threaded server using epoll. The results were night and day: epoll blows away the multithreaded implementation (for I/O) and makes the code simpler to write, debug and maintain.
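For reference, a minimal sketch of an edge-triggered epoll accept-and-read loop (Linux-specific; listen_fd is assumed to already be listening and non-blocking, and the echo in handle_readable() is only a placeholder for real message handling):
#include <sys/epoll.h>
#include <sys/socket.h>
#include <fcntl.h>
#include <unistd.h>

static void set_nonblocking(int fd)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
}

// Placeholder handler: drain the socket until recv() would block.
static void handle_readable(int fd)
{
    char buf[4096];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof(buf), 0)) > 0)
        send(fd, buf, static_cast<size_t>(n), 0);   // echo back as a stand-in
    if (n == 0)
        close(fd);                                  // peer disconnected
}

void epoll_loop(int listen_fd)
{
    int ep = epoll_create1(0);

    epoll_event ev{};
    ev.events  = EPOLLIN | EPOLLET;     // edge-triggered
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    epoll_event events[64];
    while (true) {
        int n = epoll_wait(ep, events, 64, -1);
        for (int i = 0; i < n; ++i) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                // Accept everything pending (required with edge triggering).
                int client;
                while ((client = accept(listen_fd, nullptr, nullptr)) >= 0) {
                    set_nonblocking(client);
                    epoll_event cev{};
                    cev.events  = EPOLLIN | EPOLLET;
                    cev.data.fd = client;
                    epoll_ctl(ep, EPOLL_CTL_ADD, client, &cev);
                }
            } else {
                handle_readable(fd);
            }
        }
    }
}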
I am learning multicast programming with socket.h and boost::asio. I am reviewing this link here, and they offer the following code using socket.h to implement a multicast server.
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
struct in_addr localInterface;
struct sockaddr_in groupSock;
int sd;
char databuf[1024] = "Multicast test message lol!";
int datalen = sizeof(databuf);
int main (int argc, char *argv[ ])
{
/* Create a datagram socket on which to send. */
sd = socket(AF_INET, SOCK_DGRAM, 0);
if(sd < 0)
{
perror("Opening datagram socket error");
exit(1);
}
else
printf("Opening the datagram socket...OK.\n");
/* Initialize the group sockaddr structure with a */
/* group address of 225.1.1.1 and port 5555. */
memset((char *) &groupSock, 0, sizeof(groupSock));
groupSock.sin_family = AF_INET;
groupSock.sin_addr.s_addr = inet_addr("226.1.1.1");
groupSock.sin_port = htons(4321);
/* Disable loopback so you do not receive your own datagrams.
{
char loopch = 0;
if(setsockopt(sd, IPPROTO_IP, IP_MULTICAST_LOOP, (char *)&loopch, sizeof(loopch)) < 0)
{
perror("Setting IP_MULTICAST_LOOP error");
close(sd);
exit(1);
}
else
printf("Disabling the loopback...OK.\n");
}
*/
/* Set local interface for outbound multicast datagrams. */
/* The IP address specified must be associated with a local, */
/* multicast capable interface. */
localInterface.s_addr = inet_addr("203.106.93.94");
if(setsockopt(sd, IPPROTO_IP, IP_MULTICAST_IF, (char *)&localInterface, sizeof(localInterface)) < 0)
{
perror("Setting local interface error");
exit(1);
}
else
printf("Setting the local interface...OK\n");
/* Send a message to the multicast group specified by the*/
/* groupSock sockaddr structure. */
/*int datalen = 1024;*/
if(sendto(sd, databuf, datalen, 0, (struct sockaddr*)&groupSock, sizeof(groupSock)) < 0)
{perror("Sending datagram message error");}
else
printf("Sending datagram message...OK\n");
/* Try the re-read from the socket if the loopback is not disable
if(read(sd, databuf, datalen) < 0)
{
perror("Reading datagram message error\n");
close(sd);
exit(1);
}
else
{
printf("Reading datagram message from client...OK\n");
printf("The message is: %s\n", databuf);
}
*/
return 0;
}
I am also reviewing an example of how to implement a multicast server using boost::asio here, and they present the following code.
//
// sender.cpp
// ~~~~~~~~~~
//
// Copyright (c) 2003-2010 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#include <iostream>
#include <sstream>
#include <string>
#include <boost/asio.hpp>
#include "boost/bind.hpp"
#include "boost/date_time/posix_time/posix_time_types.hpp"
const short multicast_port = 30001;
const int max_message_count = 10;
class sender
{
public:
sender(boost::asio::io_service& io_service,
const boost::asio::ip::address& multicast_address)
: endpoint_(multicast_address, multicast_port),
socket_(io_service, endpoint_.protocol()),
timer_(io_service),
message_count_(0)
{
std::ostringstream os;
os << "Message " << message_count_++;
message_ = os.str();
socket_.async_send_to(
boost::asio::buffer(message_), endpoint_,
boost::bind(&sender::handle_send_to, this,
boost::asio::placeholders::error));
}
void handle_send_to(const boost::system::error_code& error)
{
if (!error && message_count_ < max_message_count)
{
timer_.expires_from_now(boost::posix_time::seconds(1));
timer_.async_wait(
boost::bind(&sender::handle_timeout, this,
boost::asio::placeholders::error));
}
}
void handle_timeout(const boost::system::error_code& error)
{
if (!error)
{
std::ostringstream os;
os << "Message " << message_count_++;
message_ = os.str();
socket_.async_send_to(
boost::asio::buffer(message_), endpoint_,
boost::bind(&sender::handle_send_to, this,
boost::asio::placeholders::error));
}
}
private:
boost::asio::ip::udp::endpoint endpoint_;
boost::asio::ip::udp::socket socket_;
boost::asio::deadline_timer timer_;
int message_count_;
std::string message_;
};
int main(int argc, char* argv[])
{
try
{
if (argc != 2)
{
std::cerr << "Usage: sender <multicast_address>\n";
std::cerr << " For IPv4, try:\n";
std::cerr << " sender 239.255.0.1\n";
std::cerr << " For IPv6, try:\n";
std::cerr << " sender ff31::8000:1234\n";
return 1;
}
boost::asio::io_service io_service;
sender s(io_service, boost::asio::ip::address::from_string(argv[1]));
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
I noticed the example using socket.h defines both a local interface address and a multicast address. However, the example using boost::asio only defines a multicast address. I will not include the code for the sake of brevity, but I noticed that the multicast receiver examples for both socket.h and boost::asio define both local interface and multicast addresses. So why do I not need to define a local interface address when using boost::asio to implement a multicast server? Also, is boost::asio or socket.h faster if I want to send and receive multicast messages every few milliseconds?
When using multicast, one only needs to set the IP_MULTICAST_IF option when datagrams should egress a specific interface. Boost.Asio provides this option as ip::multicast::outbound_interface. When this option is not used, multicast transmissions are sent from the default interface, and the kernel may perform routing and forwarding through other interfaces. For instance, consider a server with two NICs connecting it to a LAN and a WAN. If the WAN is the default interface but multicast datagrams are to be sent to the LAN, then for a given socket one could use the socket option to specify the LAN interface as the outbound interface.
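As a sketch, the Boost.Asio equivalent of the IP_MULTICAST_IF call in the socket.h example above (reusing its 203.106.93.94 interface address) would look something like:
#include <boost/asio.hpp>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::udp::socket socket(io_service,
        boost::asio::ip::udp::endpoint(boost::asio::ip::udp::v4(), 0));

    // Equivalent of setsockopt(sd, IPPROTO_IP, IP_MULTICAST_IF, ...):
    socket.set_option(boost::asio::ip::multicast::outbound_interface(
        boost::asio::ip::address_v4::from_string("203.106.93.94")));
}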
The sender often does not care about the exact endpoint (address and port) to which its socket binds. In both sender examples, the sender creates a socket and defers to the kernel to bind it to an endpoint. In the first example, the multicast messages sent from the local socket will egress the interface that has been assigned the 203.106.93.94 address.
On the other hand, the receiver often cares about binding to a specific port. The receiver will bind the local socket to any appropriate address or defer to the kernel, and bind to the port matching the multicast endpoint's port. Once bound, a receiver will then have the socket join the multicast group, at which point the socket can begin receiving multicast datagrams. Note that for a given system, if multiple applications are interested in receiving the multicast datagram, then one should use the reuse_address socket option.
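A sketch of that receiver setup in Boost.Asio, using the 239.255.0.1:30001 endpoint from the examples (reuse_address is set before bind so that several receivers can share the port):
#include <boost/asio.hpp>

int main()
{
    namespace ip = boost::asio::ip;
    boost::asio::io_service io_service;

    ip::udp::socket socket(io_service);
    ip::udp::endpoint listen_endpoint(ip::address_v4::any(), 30001);

    socket.open(listen_endpoint.protocol());
    socket.set_option(boost::asio::socket_base::reuse_address(true));
    socket.bind(listen_endpoint);

    // Join the multicast group; datagrams sent to 239.255.0.1:30001
    // will now be delivered to this socket.
    socket.set_option(ip::multicast::join_group(
        ip::address::from_string("239.255.0.1")));
}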
Using the Boost.Asio examples as a reference, if one launches the sender with ./sender 239.255.0.1 and multiple receivers with ./receiver 0.0.0.0 239.255.0.1, then the following sockets and binds occur:
.----------.
.----------.|
.--------. address: any address: any .----------.||
| | port: any / \ port: 30001 | |||
| sender |-( ----------->| address: 239.255.0.1 |----------> )-| receiver ||'
| | \ port: 30001 / | |'
'--------' '----------'
The sender binds to any address and port. For instance, lets say the kernel binds it to port 24000.
The receiver binds to any address and port 30001 and joins the 239.255.0.1 multicast group.
The sender writes messages to 239.255.0.1:30001.
The receiver receives messages sent to 239.255.0.1:30001. The receiver's receive_from() operation's sender_endpoint argument will be populated with the sender's endpoint address and port 24000.
As far as performance goes, profiling the application would provide a definitive answer. The examples provided in the question are very different (synchronous vs. asynchronous), so directly comparing the two to determine which is faster may not be appropriate. In general, Boost.Asio will provide some overhead due to its abstractions. However, I have yet to work on an application where Boost.Asio's overhead was the problem, and its abstractions have saved me countless development and maintenance man-hours.
I apologize in advance if the question has been answered before, but I've searched and found nothing that helps me. As the title indicates, I'm trying to broadcast a packet from a server to a set of clients listening for any message.
The client will count the number of messages it receives during one second.
The server side of things goes like this:
class Server
{
public:
Server(boost::asio::io_service& io)
: socket(io, udp::endpoint(udp::v4(), 8888))
, broadcastEndpoint(address_v4::broadcast(), 8888)
, tickHandler(boost::bind(&Server::Tick, this, boost::asio::placeholders::error))
, timer(io, boost::posix_time::milliseconds(20))
{
socket.set_option(boost::asio::socket_base::reuse_address(true));
socket.set_option(boost::asio::socket_base::broadcast(true));
timer.async_wait(tickHandler);
}
private:
void Tick(const boost::system::error_code&)
{
socket.send_to(boost::asio::buffer(buffer), broadcastEndpoint);
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(20));
timer.async_wait(tickHandler);
}
private:
udp::socket socket;
udp::endpoint broadcastEndpoint;
boost::function<void(const boost::system::error_code&)> tickHandler;
boost::asio::deadline_timer timer;
boost::array<char, 100> buffer;
};
It is initialized and run in the following way:
int main()
{
try
{
boost::asio::io_service io;
Server server(io);
io.run();
}
catch (const std::exception& e)
{
std::cerr << e.what() << "\n";
}
return 0;
}
This (apparently) works fine. Now comes the client...
void HandleReceive(const boost::system::error_code&, std::size_t bytes)
{
std::cout << "Got " << bytes << " bytes\n";
}
int main(int argc, char* argv[])
{
if (argc != 2)
{
std::cerr << "Usage: " << argv[0] << " <host>\n";
return 1;
}
try
{
boost::asio::io_service io;
udp::resolver resolver(io);
udp::resolver::query query(udp::v4(), argv[1], "1666");
udp::endpoint serverEndpoint = *resolver.resolve(query);
//std::cout << serverEndpoint.address() << "\n";
udp::socket socket(io);
socket.open(udp::v4());
socket.bind(serverEndpoint);
udp::endpoint senderEndpoint;
boost::array<char, 300> buffer;
auto counter = 0;
auto start = std::chrono::system_clock::now();
while (true)
{
socket.receive_from(boost::asio::buffer(buffer), senderEndpoint);
++counter;
auto current = std::chrono::system_clock::now();
if (current - start >= std::chrono::seconds(1))
{
std::cout << counter << "\n";
counter = 0;
start = current;
}
}
}
catch (const std::exception& e)
{
std::cerr << e.what() << "\n";
}
This works when running both the server and the client on the same machine, but not when I run the server on a different machine from the client.
The first thing is, it seems odd to me that I have to resolve the server's address. Perhaps I don't understand how broadcasting really works, but I thought the server would send a message using its socket with the broadcast option turned on, and it would arrive at all the sockets on the same network.
I read that you should bind the client's socket to the address_v4::any() address. I did, but it doesn't work (it says something about a socket already using the address/port).
Thanks in advance.
PS: I'm under Windows 8.
I am a bit surprised this works on the same machine. I would not have expected the client, listening to port 1666, to receive data being sent to the broadcast address on port 8888.
bind() assigns a local endpoint (composed of a local address and port) to the socket. When a socket binds to an endpoint, it specifies that the socket will only receive data sent to the bound address and port. It is often advised to bind to address_v4::any(), as this will use all available interfaces for listening. In the case of a system with multiple interfaces (possible multiple NIC cards), binding to a specific interface address will result in the socket only listening to data received from the specified interface[1]. Thus, one might find themselves obtaining an address through resolve() when the application wants to bind to a specific network interface and wants to support resolving it by providing the IP directly (127.0.0.1) or a name (localhost).
It is important to note that when binding to a socket, the endpoint is composed of both an address and port. This is the source of my surprise that it works on the same machine. If the server is writing to broadcast:8888, a socket bound to port 1666 should not receive the datagram. Nevertheless, here is a visual of the endpoints and networking:
.--------.
.--------.|
.--------. address: any address: any .--------.||
| | port: any / \ port: 8888 | |||
| server |-( ----------->| address: broadcast |----------> )-| client ||'
| | \ port: 8888 / | |'
'--------' '--------'
The server binds to any address and any port, enables the broadcast option, and sends data to the remote endpoint (broadcast:8888). Clients bound to the any address on port 8888 should receive the data.
A simple example is as follows.
The server:
#include <boost/asio.hpp>
int main()
{
namespace ip = boost::asio::ip;
boost::asio::io_service io_service;
// Server binds to any address and any port.
ip::udp::socket socket(io_service,
ip::udp::endpoint(ip::udp::v4(), 0));
socket.set_option(boost::asio::socket_base::broadcast(true));
// Broadcast will go to port 8888.
ip::udp::endpoint broadcast_endpoint(ip::address_v4::broadcast(), 8888);
// Broadcast data.
boost::array<char, 4> buffer;
socket.send_to(boost::asio::buffer(buffer), broadcast_endpoint);
}
The client:
#include <iostream>
#include <boost/asio.hpp>
int main()
{
namespace ip = boost::asio::ip;
boost::asio::io_service io_service;
// Client binds to any address on port 8888 (the same port on which
// broadcast data is sent from server).
ip::udp::socket socket(io_service,
ip::udp::endpoint(ip::udp::v4(), 8888 ));
ip::udp::endpoint sender_endpoint;
// Receive data.
boost::array<char, 4> buffer;
std::size_t bytes_transferred =
socket.receive_from(boost::asio::buffer(buffer), sender_endpoint);
std::cout << "got " << bytes_transferred << " bytes." << std::endl;
}
When the client is not co-located with the server, then it could be a variety of network related issues:
Verify connectivity between the server and client.
Verify firewall exceptions.
Verify broadcast support/exceptions on the routing device.
Use a network analyzer tool, such as Wireshark, to verify that the time to live field in the packets is high enough that it will not be discarded during routing.
1. On Linux, broadcast datagrams received by an adapter will not be passed to a socket bound to a specific interface, as the datagram's destination is set to the broadcast address. On the other hand, Windows will pass broadcast datagrams received by an adapter to sockets bound to a specific interface.