invalid argument on boost asio udp socket connect (IPv6) - c++

I've been struggling with an issue for hours:
I want to connect a boost asio UDP socket to an endpoint. There is no problem doing this with IPv4, but if I try to do the same with IPv6, I get the error code "invalid argument".
#include <boost/asio.hpp>
#include <iostream>

using boost::asio::ip::udp;
struct UdpConnectionParams
{
udp::endpoint m_localEndpoint;
udp::endpoint m_remoteEndpoint;
};
boost::system::error_code setupUdpConnection(udp::socket& p_socket, const UdpConnectionParams& p_params)
{
// close socket
boost::system::error_code h_ignoreError;
p_socket.close(h_ignoreError);
// variables for kind of UDP connection
udp h_protocol(udp::v4());
bool h_shallBind{false};
bool h_shallConnect{false};
// determine kind of connection
if(p_params.m_localEndpoint != udp::endpoint())
{
h_protocol = p_params.m_localEndpoint.protocol();
h_shallBind = true;
}
if(p_params.m_remoteEndpoint != udp::endpoint())
{
h_protocol = p_params.m_remoteEndpoint.protocol();
h_shallConnect = true;
}
if(!h_shallBind && !h_shallConnect)
{
// no endpoint specified, return error
return boost::system::error_code(ENetworkErrorCode::NO_ENDPOINT_SPECIFIED, NetworkErrorCategory::getCategory());
}
try
{
p_socket.open(h_protocol);
//bind socket to certain endpoint
if(h_shallBind)
{
p_socket.bind(p_params.m_localEndpoint);
}
//connect socket to client. Thus it is possible to use p_socket.send()
if(h_shallConnect)
{
p_socket.connect(p_params.m_remoteEndpoint);
}
}
catch (boost::system::system_error& h_error)
{
p_socket.close(h_ignoreError);
return h_error.code();
}
// no error
return boost::system::error_code();
}
int main()
{
boost::asio::io_service service;
udp::socket socket(service);
boost::system::error_code error;
UdpConnectionParams params;
params.m_localEndpoint = udp::endpoint(udp::v6(), 55555);
params.m_remoteEndpoint = udp::endpoint(boost::asio::ip::address_v6::from_string("ff01::101"), 55555);
error = setupUdpConnection(socket, params);
std::cout << error << " " << error.message() << std::endl; // "invalid argument"
return 0;
}
The only way I get no error is with the localhost IP address (::1). It makes no difference whether or not I bind the socket to an endpoint.
What am I doing wrong?

What am I doing wrong?
The problem is that you don't specify an interface index/scope in the IPv6 address you are using. IPv6 multicast addresses require a scope to be specified, so that the network stack knows which of your computer's local network interfaces to associate the IP address with.
i.e. instead of:
boost::asio::ip::address_v6::from_string("ff01::101"), 55555);
you need something like:
boost::asio::ip::address_v6::from_string("ff01::101%eth0"), 55555);
(The suffix after the % symbol will depend on the name of the network interface you want to use, of course)
(As a side note, the "ff01::" prefix is for node-local IPv6 multicast groups, which means your UDP packets will only go to other programs running on the same computer. If that's what you intended, then great; on the other hand, if you want your UDP packets to reach other computers on the same LAN, use a "ff02::" or "ff12::" prefix instead (ff02:: is for well-known multicast addresses, ff12:: for transient ones). See the "Multicast address scope" table on the Wikipedia page for details.)
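If appending the interface name is inconvenient, the scope can also be attached programmatically. A minimal sketch, assuming a link-local group and a placeholder interface index of 2 (on Linux you could look the index up with if_nametoindex("eth0")):
using boost::asio::ip::udp;
// Attach a scope (interface index) to the multicast address before building the endpoint.
boost::asio::ip::address_v6 h_address = boost::asio::ip::address_v6::from_string("ff02::101");
h_address.scope_id(2); // placeholder interface index, e.g. from if_nametoindex("eth0")
udp::endpoint h_remoteEndpoint(h_address, 55555);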

Related

boost::asio::ip::tcp::socket - How to bind to a specific local port

I am making a client socket.
To make things easier for my testers, I'd like to specify the network card and port that the socket will use.
Yesterday, in my Google search, I found: Binding boost asio to local tcp endpoint
By performing the open, bind, and async_connect, I was able to bind to a specific network card and I started seeing traffic in Wireshark.
However, Wireshark reports that the socket has been given a random port rather than the one I specified. I would have thought that if the port were already in use, bind would have filled out the error_code passed to it.
What am I doing wrong?
Here is my minimal example, extracted and edited from my real solution.
// Boost Includes
#include <boost/asio.hpp>
#include <boost/atomic.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <boost/thread/condition_variable.hpp>
// Standard Includes
#include <exception>
#include <memory>
#include <string>
#include <sstream>
boost::asio::io_service g_ioService; /** ASIO sockets require an io_service to run on*/
boost::thread g_thread; /** thread that will run the io_service and hence where callbacks are called*/
boost::asio::ip::tcp::socket g_socket(g_ioService); /** Async socket*/
boost::asio::ip::tcp::resolver g_resolver(g_ioService); /** Resolves IP Addresses*/
//--------------------------------------------------------------------------------------------------
void OnConnect(const boost::system::error_code & errorCode, boost::asio::ip::tcp::resolver::iterator endpoint)
{
if (errorCode || endpoint == boost::asio::ip::tcp::resolver::iterator())
{
// Error - an error occurred while attempting to connect
throw std::runtime_error("An error occurred while attempting to connect");
}
// We connected to an endpoint
/*
// Start reading from the socket
auto callback = boost::bind(OnReceive, boost::asio::placeholders::error);
boost::asio::async_read_until(g_socket, m_receiveBuffer, '\n', callback);
*/
}
//--------------------------------------------------------------------------------------------------
void Connect()
{
const std::string hostName = "10.84.0.36";
const unsigned int port = 1007;
// Resolve to translate the server machine name into a list of endpoints
std::ostringstream converter;
converter << port;
const std::string portAsString = converter.str();
boost::asio::ip::tcp::resolver::query query(hostName, portAsString);
boost::system::error_code errorCode;
boost::asio::ip::tcp::resolver::iterator itEnd;
boost::asio::ip::tcp::resolver::iterator itEndpoint = g_resolver.resolve(query, errorCode);
if (errorCode || itEndpoint == itEnd)
{
// Error - Could not resolve either machine
throw std::runtime_error("Could not resolve either machine");
}
g_socket.open(boost::asio::ip::tcp::v4(), errorCode);
if (errorCode)
{
// Error - could not open the socket
throw std::runtime_error("Could not open the socket");
}
boost::asio::ip::tcp::endpoint localEndpoint(boost::asio::ip::address::from_string("10.86.0.18"), 6000);
g_socket.bind(localEndpoint, errorCode);
if (errorCode)
{
// Error - could not bind the socket to the local endpoint
throw std::runtime_error("Could not bind the socket to the local endpoint");
}
// Attempt to asynchronously connect using each possible end point until we find one that works
boost::asio::async_connect(g_socket, itEndpoint, boost::bind(OnConnect, boost::asio::placeholders::error, boost::asio::placeholders::iterator));
}
//--------------------------------------------------------------------------------------------------
void g_ioServiceg_threadProc()
{
try
{
// Connect to the server
Connect();
// Run the asynchronous callbacks from the g_socket on this thread
// Until the io_service is stopped from another thread
g_ioService.run();
}
catch (...)
{
throw std::runtime_error("unhandled exception caught from io_service g_thread");
}
}
//--------------------------------------------------------------------------------------------------
int main()
{
// Start up the IO service thread
g_thread = boost::thread(g_ioServiceg_threadProc);
// Hang out awhile
boost::this_thread::sleep_for(boost::chrono::seconds(60));
// Stop the io service and allow the g_thread to exit
// This will cancel any outstanding work on the io_service
g_ioService.stop();
// Join our g_thread
if (g_thread.joinable())
{
g_thread.join();
}
return 0;
}
As can be seen in Wireshark (screenshot omitted here), a random port 32781 was selected rather than my requested port 6000.
I doubt the topic starter is still interested in this question, but for all future seekers like myself, here is the solution.
The issue here is that boost::asio::connect closes the socket before calling connect for each endpoint in the provided range:
From boost/asio/impl/connect.hpp:
template <typename Protocol BOOST_ASIO_SVC_TPARAM,
typename Iterator, typename ConnectCondition>
Iterator connect(basic_socket<Protocol BOOST_ASIO_SVC_TARG>& s,
Iterator begin, Iterator end, ConnectCondition connect_condition,
boost::system::error_code& ec)
{
ec = boost::system::error_code();
for (Iterator iter = begin; iter != end; ++iter)
{
iter = (detail::call_connect_condition(connect_condition, ec, iter, end));
if (iter != end)
{
s.close(ec); // <------
s.connect(*iter, ec);
if (!ec)
return iter;
}
...
}
That is why the bound address is reset. To keep it bound, call socket.connect()/async_connect(...) on the socket directly instead of using the free connect functions.
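For instance, a sketch of what that might look like with the question's variables (connecting to a single resolved endpoint, error handling omitted):
boost::asio::ip::tcp::endpoint localEndpoint(boost::asio::ip::address::from_string("10.86.0.18"), 6000);
g_socket.open(boost::asio::ip::tcp::v4());
g_socket.bind(localEndpoint);
// Connect to one endpoint directly; unlike async_connect over a range,
// this does not close and re-open the socket, so the bound local port is kept.
g_socket.async_connect(*itEndpoint,
    boost::bind(OnConnect, boost::asio::placeholders::error, itEndpoint));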
6000 is the remote endpoint port, and it is correctly used (otherwise, you wouldn't be connecting to the server side).
From: https://idea.popcount.org/2014-04-03-bind-before-connect/
A TCP/IP connection is identified by a four element tuple: {source IP, source port, destination IP, destination port}. To establish a TCP/IP connection only a destination IP and port number are needed, the operating system automatically selects source IP and port.
Since you do not bind to a local port, one is selected randomly from the "ephemeral port range". This is, by far, the usual way to connect.
Fear not:
It is possible to ask the kernel to select a specific source IP and port by calling bind() before calling connect()
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Let the source address be 192.168.1.21:1234
s.bind(("192.168.1.21", 1234))
s.connect(("www.google.com", 80))
The sample is python.
You do that, but still get another port. It's likely that the hint port is not available.
Check the information on SO_REUSEADDR and SO_REUSEPORT in the linked article.
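As a sketch, the reuse_address option would go between open() and bind() in the question's Connect() (this only helps if the port is stuck in TIME_WAIT from a previous run; it will not free a port another process is actively using):
g_socket.open(boost::asio::ip::tcp::v4(), errorCode);
// Allow rebinding to a local port that is still in TIME_WAIT.
g_socket.set_option(boost::asio::socket_base::reuse_address(true));
g_socket.bind(localEndpoint, errorCode);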

Differences between boost::asio and socket.h for multicast

I am learning multicast programming with socket.h and boost::asio. I am reviewing this link here, and they offer the following code using socket.h to implement a multicast server.
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
struct in_addr localInterface;
struct sockaddr_in groupSock;
int sd;
char databuf[1024] = "Multicast test message lol!";
int datalen = sizeof(databuf);
int main (int argc, char *argv[ ])
{
/* Create a datagram socket on which to send. */
sd = socket(AF_INET, SOCK_DGRAM, 0);
if(sd < 0)
{
perror("Opening datagram socket error");
exit(1);
}
else
printf("Opening the datagram socket...OK.\n");
/* Initialize the group sockaddr structure with a */
/* group address of 225.1.1.1 and port 5555. */
memset((char *) &groupSock, 0, sizeof(groupSock));
groupSock.sin_family = AF_INET;
groupSock.sin_addr.s_addr = inet_addr("226.1.1.1");
groupSock.sin_port = htons(4321);
/* Disable loopback so you do not receive your own datagrams.
{
char loopch = 0;
if(setsockopt(sd, IPPROTO_IP, IP_MULTICAST_LOOP, (char *)&loopch, sizeof(loopch)) < 0)
{
perror("Setting IP_MULTICAST_LOOP error");
close(sd);
exit(1);
}
else
printf("Disabling the loopback...OK.\n");
}
*/
/* Set local interface for outbound multicast datagrams. */
/* The IP address specified must be associated with a local, */
/* multicast capable interface. */
localInterface.s_addr = inet_addr("203.106.93.94");
if(setsockopt(sd, IPPROTO_IP, IP_MULTICAST_IF, (char *)&localInterface, sizeof(localInterface)) < 0)
{
perror("Setting local interface error");
exit(1);
}
else
printf("Setting the local interface...OK\n");
/* Send a message to the multicast group specified by the*/
/* groupSock sockaddr structure. */
/*int datalen = 1024;*/
if(sendto(sd, databuf, datalen, 0, (struct sockaddr*)&groupSock, sizeof(groupSock)) < 0)
{perror("Sending datagram message error");}
else
printf("Sending datagram message...OK\n");
/* Try the re-read from the socket if the loopback is not disable
if(read(sd, databuf, datalen) < 0)
{
perror("Reading datagram message error\n");
close(sd);
exit(1);
}
else
{
printf("Reading datagram message from client...OK\n");
printf("The message is: %s\n", databuf);
}
*/
return 0;
}
I am also reviewing an example of how to implement a multicast server using boost::asio here, and they present the following code.
//
// sender.cpp
// ~~~~~~~~~~
//
// Copyright (c) 2003-2010 Christopher M. Kohlhoff (chris at kohlhoff dot com)
//
// Distributed under the Boost Software License, Version 1.0. (See accompanying
// file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
//
#include <iostream>
#include <sstream>
#include <string>
#include <boost/asio.hpp>
#include "boost/bind.hpp"
#include "boost/date_time/posix_time/posix_time_types.hpp"
const short multicast_port = 30001;
const int max_message_count = 10;
class sender
{
public:
sender(boost::asio::io_service& io_service,
const boost::asio::ip::address& multicast_address)
: endpoint_(multicast_address, multicast_port),
socket_(io_service, endpoint_.protocol()),
timer_(io_service),
message_count_(0)
{
std::ostringstream os;
os << "Message " << message_count_++;
message_ = os.str();
socket_.async_send_to(
boost::asio::buffer(message_), endpoint_,
boost::bind(&sender::handle_send_to, this,
boost::asio::placeholders::error));
}
void handle_send_to(const boost::system::error_code& error)
{
if (!error && message_count_ < max_message_count)
{
timer_.expires_from_now(boost::posix_time::seconds(1));
timer_.async_wait(
boost::bind(&sender::handle_timeout, this,
boost::asio::placeholders::error));
}
}
void handle_timeout(const boost::system::error_code& error)
{
if (!error)
{
std::ostringstream os;
os << "Message " << message_count_++;
message_ = os.str();
socket_.async_send_to(
boost::asio::buffer(message_), endpoint_,
boost::bind(&sender::handle_send_to, this,
boost::asio::placeholders::error));
}
}
private:
boost::asio::ip::udp::endpoint endpoint_;
boost::asio::ip::udp::socket socket_;
boost::asio::deadline_timer timer_;
int message_count_;
std::string message_;
};
int main(int argc, char* argv[])
{
try
{
if (argc != 2)
{
std::cerr << "Usage: sender <multicast_address>\n";
std::cerr << " For IPv4, try:\n";
std::cerr << " sender 239.255.0.1\n";
std::cerr << " For IPv6, try:\n";
std::cerr << " sender ff31::8000:1234\n";
return 1;
}
boost::asio::io_service io_service;
sender s(io_service, boost::asio::ip::address::from_string(argv[1]));
io_service.run();
}
catch (std::exception& e)
{
std::cerr << "Exception: " << e.what() << "\n";
}
return 0;
}
I noticed the example using socket.h defines both a local interface address and a multicast address. However, the example using boost::asio only defines a multicast address. I will not include the code for the sake of brevity, but I noticed that the multicast receiver code for both socket.h and boost::asio defines both a local interface address and a multicast address. So why do I not need to define a local interface address with boost::asio to implement a multicast server? Also, is boost::asio or socket.h faster if I want to send and receive multicast messages every few milliseconds?
When using multicast, one only needs to set the IP_MULTICAST_IF option when datagrams should egress a specific interface. Boost.Asio provides this option as ip::multicast::outbound_interface. When this option is not used, multicast transmissions are sent from the default interface, and the kernel may perform routing and forwarding through other interfaces. For instance, consider a server with two NICs, connecting it to a LAN and a WAN. If the WAN is the default interface but multicast datagrams are to be sent to the LAN, then for a given socket one could use this option to specify the LAN-facing interface as the outbound interface.
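For reference, the rough Boost.Asio equivalent of the IP_MULTICAST_IF block from the first example might look like this (the address is the one from the question; it must belong to a local, multicast-capable interface):
boost::asio::ip::udp::socket socket(io_service, boost::asio::ip::udp::v4());
// Equivalent of setsockopt(sd, IPPROTO_IP, IP_MULTICAST_IF, ...): pick the egress interface.
socket.set_option(boost::asio::ip::multicast::outbound_interface(
    boost::asio::ip::address_v4::from_string("203.106.93.94")));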
The sender usually does not care about the exact endpoint (address and port) to which its socket binds. In both sender examples, the sender creates a socket and defers to the kernel to bind it to an endpoint. In the first example, the multicast messages sent from the local socket will egress the interface that has been assigned the 203.106.93.94 address.
On the other hand, the receiver often cares about binding to a specific port. The receiver will bind the local socket to any appropriate address or defer to the kernel, and bind to the port matching the multicast endpoint's port. Once bound, a receiver will then have the socket join the multicast group, at which point the socket can begin receiving multicast datagrams. Note that for a given system, if multiple applications are interested in receiving the multicast datagram, then one should use the reuse_address socket option.
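A typical receiver setup, sketched along the lines of the Boost.Asio multicast receiver example (the listen address, port, and group address here are placeholders):
boost::asio::ip::udp::endpoint listen_endpoint(
    boost::asio::ip::address::from_string("0.0.0.0"), 30001);
boost::asio::ip::udp::socket socket(io_service);
socket.open(listen_endpoint.protocol());
// Allow multiple receivers on the same machine to bind the same port.
socket.set_option(boost::asio::ip::udp::socket::reuse_address(true));
socket.bind(listen_endpoint);
// Join the group; only after this will multicast datagrams be delivered to the socket.
socket.set_option(boost::asio::ip::multicast::join_group(
    boost::asio::ip::address::from_string("239.255.0.1")));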
Using the Boost.Asio examples as a reference, if one launches the sender with ./sender 239.255.0.1 and multiple receivers with ./receiver 0.0.0.0 239.255.0.1, then the following sockets and binds occur:
.----------.
.----------.|
.--------. address: any address: any .----------.||
| | port: any / \ port: 30001 | |||
| sender |-( ----------->| address: 239.255.0.1 |----------> )-| receiver ||'
| | \ port: 30001 / | |'
'--------' '----------'
The sender binds to any address and port. For instance, lets say the kernel binds it to port 24000.
The receiver binds to any address and port 30001 and joins the 239.255.0.1 multicast group.
The sender writes messages to 239.255.0.1:30001.
The receiver receives messages sent to 239.255.0.1:30001. The receiver's receive_from() operation's sender_endpoint argument will be populated with the sender's endpoint address and port 24000.
As far as performance goes, profiling the application would provide a definitive answer. The examples provided in the question are very different (synchronous vs. asynchronous), so directly comparing the two to determine which is faster may not be appropriate. In general, Boost.Asio will provide some overhead due to its abstractions. However, I have yet to work on an application where Boost.Asio's overhead was the problem, and its abstractions have saved me countless development and maintenance man-hours.

Issue with broadcast using Boost.Asio

I apologize in advance if the question has been previously answered, but I've searched and found nothing that helps me. As indicated by the question's title, I'm trying to broadcast a packet from a server to a set of clients listening for any message.
The client will count the number of messages it receives during one second.
The server side of things goes like this:
class Server
{
public:
Server(boost::asio::io_service& io)
: socket(io, udp::endpoint(udp::v4(), 8888))
, broadcastEndpoint(address_v4::broadcast(), 8888)
, tickHandler(boost::bind(&Server::Tick, this, boost::asio::placeholders::error))
, timer(io, boost::posix_time::milliseconds(20))
{
socket.set_option(boost::asio::socket_base::reuse_address(true));
socket.set_option(boost::asio::socket_base::broadcast(true));
timer.async_wait(tickHandler);
}
private:
void Tick(const boost::system::error_code&)
{
socket.send_to(boost::asio::buffer(buffer), broadcastEndpoint);
timer.expires_at(timer.expires_at() + boost::posix_time::milliseconds(20));
timer.async_wait(tickHandler);
}
private:
udp::socket socket;
udp::endpoint broadcastEndpoint;
boost::function<void(const boost::system::error_code&)> tickHandler;
boost::asio::deadline_timer timer;
boost::array<char, 100> buffer;
};
It is initialized and run in the following way:
int main()
{
try
{
boost::asio::io_service io;
Server server(io);
io.run();
}
catch (const std::exception& e)
{
std::cerr << e.what() << "\n";
}
return 0;
}
This (apparently) works fine. Now comes the client...
void HandleReceive(const boost::system::error_code&, std::size_t bytes)
{
std::cout << "Got " << bytes << " bytes\n";
}
int main(int argc, char* argv[])
{
if (argc != 2)
{
std::cerr << "Usage: " << argv[0] << " <host>\n";
return 1;
}
try
{
boost::asio::io_service io;
udp::resolver resolver(io);
udp::resolver::query query(udp::v4(), argv[1], "1666");
udp::endpoint serverEndpoint = *resolver.resolve(query);
//std::cout << serverEndpoint.address() << "\n";
udp::socket socket(io);
socket.open(udp::v4());
socket.bind(serverEndpoint);
udp::endpoint senderEndpoint;
boost::array<char, 300> buffer;
auto counter = 0;
auto start = std::chrono::system_clock::now();
while (true)
{
socket.receive_from(boost::asio::buffer(buffer), senderEndpoint);
++counter;
auto current = std::chrono::system_clock::now();
if (current - start >= std::chrono::seconds(1))
{
std::cout << counter << "\n";
counter = 0;
start = current;
}
}
}
catch (const std::exception& e)
{
std::cerr << e.what() << "\n";
}
This works when running both the server and client on the same machine, but not when I run the server on a different machine from the one where I run the client.
First of all, it seems odd to me that I have to resolve the server's address. Perhaps I don't know how broadcasting really works, but I thought the server would send a message using its socket with the broadcast option turned on, and it would arrive at all the sockets in the same network.
I read you should bind the client's socket to the address_v4::any() address. I did, but it doesn't work (it complains about a socket already using the address/port).
Thanks in advance.
PS: I'm under Windows 8.
I am a bit surprised this works on the same machine. I would not have expected the client, listening to port 1666, to receive data being sent to the broadcast address on port 8888.
bind() assigns a local endpoint (composed of a local address and port) to the socket. When a socket binds to an endpoint, it specifies that the socket will only receive data sent to that address and port. It is often advised to bind to address_v4::any(), as this will listen on all available interfaces. On a system with multiple interfaces (possibly multiple NICs), binding to a specific interface address will result in the socket only listening to data received on that interface[1]. Thus, one might find oneself obtaining an address through resolve() when the application wants to bind to a specific network interface and wants to support specifying it either as an IP (127.0.0.1) or as a name (localhost).
It is important to note that when binding to a socket, the endpoint is composed of both an address and port. This is the source of my surprise that it works on the same machine. If the server is writing to broadcast:8888, a socket bound to port 1666 should not receive the datagram. Nevertheless, here is a visual of the endpoints and networking:
.--------.
.--------.|
.--------. address: any address: any .--------.||
| | port: any / \ port: 8888 | |||
| server |-( ----------->| address: broadcast |----------> )-| client ||'
| | \ port: 8888 / | |'
'--------' '--------'
The server binds to any address and any port, enables the broadcast option, and sends data to the remote endpoint (broadcast:8888). Clients bound to the any address on port 8888 should receive the data.
A simple example is as follows.
The server:
#include <boost/array.hpp>
#include <boost/asio.hpp>
int main()
{
namespace ip = boost::asio::ip;
boost::asio::io_service io_service;
// Server binds to any address and any port.
ip::udp::socket socket(io_service,
ip::udp::endpoint(ip::udp::v4(), 0));
socket.set_option(boost::asio::socket_base::broadcast(true));
// Broadcast will go to port 8888.
ip::udp::endpoint broadcast_endpoint(ip::address_v4::broadcast(), 8888);
// Broadcast data.
boost::array<char, 4> buffer;
socket.send_to(boost::asio::buffer(buffer), broadcast_endpoint);
}
The client:
#include <iostream>
#include <boost/array.hpp>
#include <boost/asio.hpp>
int main()
{
namespace ip = boost::asio::ip;
boost::asio::io_service io_service;
// Client binds to any address on port 8888 (the same port on which
// broadcast data is sent from server).
ip::udp::socket socket(io_service,
ip::udp::endpoint(ip::udp::v4(), 8888 ));
ip::udp::endpoint sender_endpoint;
// Receive data.
boost::array<char, 4> buffer;
std::size_t bytes_transferred =
socket.receive_from(boost::asio::buffer(buffer), sender_endpoint);
std::cout << "got " << bytes_transferred << " bytes." << std::endl;
}
When the client is not co-located with the server, the problem could be any of a variety of network-related issues:
Verify connectivity between the server and client.
Verify firewall exceptions.
Verify broadcast support/exceptions on the routing device.
Use a network analyzer tool, such as Wireshark, to verify that the time to live field in the packets is high enough that it will not be discarded during routing.
1. On Linux, broadcast datagrams received by an adapter will not be passed to a socket bound to a specific interface, as the datagram's destination is set to the broadcast address. On the other hand, Windows will pass broadcast datagrams received by an adapter to sockets bound to a specific interface.

winsock: connect fails with error 10049 when using localhost (127.0.0.1)

I wrote a class encapsulating some of the Winsock functions to imitate a simple TCP socket for my needs...
When I try to run a simple connect-and-send-data-to-server test, the "client" fails on its call to connect with error code 10049 (WSAEADDRNOTAVAIL); see the connect function on MSDN.
What I am doing is (code below):
Server:
Create a Server Socket -> Bind it to Port 12345
Put the Socket in listen mode
Call accept
Client
Create a socket -> Bind it to a random port
Call Connect: connect to localhost, port 12345
=> the call to connect fails with Error 10049, as described above
Here is the main function including the "server":
HANDLE hThread = NULL;
Inc::CSocketTCP ServerSock;
Inc::CSocketTCP ClientSock;
try
{
ServerSock.Bind(L"", L"12345");
ServerSock.Listen(10);
//Spawn the senders-thread
hThread = (HANDLE)_beginthreadex(nullptr, 0, Procy, nullptr, 0, nullptr);
//accept
ServerSock.Accept(ClientSock);
//Adjust the maximum packet size
ClientSock.SetPacketSize(100);
//receive data
std::wstring Data;
ClientSock.Receive(Data);
std::wcout << "Received:\t" << Data << std::endl;
}
catch(std::exception& e)
{...
Client thread function
unsigned int WINAPI Procy(void* p)
{
Sleep(1500);
try{
Inc::CSocketTCP SenderSock;
SenderSock.Bind(L"", L"123456");
SenderSock.Connect(L"localhost", L"12345");
//Adjust packet size
SenderSock.SetPacketSize(100);
//Send Data
std::wstring Data = L"Hello Bello!";
SenderSock.Send(Data);
}
catch(std::exception& e)
{
std::wcout << e.what() << std::endl;
}...
The Connect-Function
int Inc::CSocketTCP::Connect(const std::wstring& IP, const std::wstring& Port)
{
//NOTE: assert that the socket is valid
assert(m_Socket != INVALID_SOCKET);
//for debuggin: convert WStringToString here
std::string strIP = WStringToString(IP), strPort = WStringToString(Port);
Incgetaddrinfo AddyResolver;
addrinfo hints = {}, *pFinal = nullptr;
hints.ai_family = AF_INET;
//resolve the ip/port-combination for the connection process
INT Ret = AddyResolver(strIP.c_str(), strPort.c_str(), &hints, &pFinal);
if(Ret)
{
//error handling: throw an error description
std::string ErrorString("Resolving Process failed (Connect): ");
ErrorString.append(Inc::NumberToString<INT>(Ret));
throw(std::runtime_error(ErrorString.c_str()));
}
/*---for debbuging---*/
sockaddr_in *pP = (sockaddr_in*)(pFinal->ai_addr);
u_short Porty = ntohs(pP->sin_port);
char AddBuffer[20] = "";
InetNtopA(AF_INET, (PVOID)&pP->sin_addr, AddBuffer, 20);
/*--------------------------------------------------------*/
if(connect(m_Socket, pFinal->ai_addr, pFinal->ai_addrlen) == SOCKET_ERROR)
{
int ErrorCode = WSAGetLastError();
if((ErrorCode == WSAETIMEDOUT) || (ErrorCode == WSAEHOSTUNREACH) || (ErrorCode == WSAENETUNREACH))
{
//Just Unreachable
return TCP_TARGETUNREACHABLE;
}
//real errors now
std::string ErrorString("Connection Process failed: ");
ErrorString.append(Inc::NumberToString<int>(ErrorCode));
throw(std::runtime_error(ErrorString.c_str()));
}
return TCP_SUCCESS;
}
Additional Information:
- Incgetaddrinfo is a function object encapsulating getaddrinfo...
- None of the server functions returns an error, and they work as expected when stepping through them with the debugger or when letting them run on their own...
I'd be glad to hear your suggestions on what the problem might be...
Edit: It works when I don't connect to ("localhost", "12345"), but to ("", "12345")...
When I look into the address resolution process of getaddrinfo, it gives 127.0.0.1 for "localhost" and my real IP for "".
Why doesn't it work with my loopback IP?
You have the answer in your question:
... it gives 127.0.0.1 for "localhost" and my real IP for ""
This means your server binds to the real IP of the interface instead of INADDR_ANY, i.e. it doesn't listen on the loopback.
Edit 0:
You don't really need name resolution when creating a listening socket. Just bind() it to INADDR_ANY to listen on all available interfaces (including the loopback).
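A minimal Winsock sketch of such a bind (listenSocket stands for the server's SOCKET handle, which the question's class wraps; error handling omitted):
sockaddr_in addr = {};
addr.sin_family = AF_INET;
addr.sin_addr.s_addr = htonl(INADDR_ANY);   // listen on all interfaces, including loopback
addr.sin_port = htons(12345);
bind(listenSocket, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));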
In my case, this error was caused by not calling htonl on INADDR_LOOPBACK before assigning it to address.sin_addr.s_addr.
Make sure you convert to network byte order, or you'll get 0x0100007f (1.0.0.127) instead of 0x7f000001 (127.0.0.1) for your loopback address and the bind will fail with code 10049 (WSAEADDRNOTAVAIL).
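In other words, something along these lines (sketch):
sockaddr_in address = {};
address.sin_family = AF_INET;
address.sin_addr.s_addr = htonl(INADDR_LOOPBACK);   // 0x7f000001 -> 127.0.0.1
address.sin_port = htons(12345);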

Server won't connect to more than one client?

The problem is it only connects to one client instead of two. Can anyone help me figure out why?
Server:
#include <SFML/System.hpp>
#include <SFML/Network.hpp>
#include <iostream>
void sendInfo(void *UserData)
{
sf::IPAddress* ip = static_cast<sf::IPAddress*>(UserData);
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "sending info.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), *ip, 4444) != sf::Socket::Done)
{
// Error...
}
}
}
void receiveInfo(void *userData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
std::cout << Buffer << std::endl;
Socket.Close();
}
}
int main()
{
sf::IPAddress client[2];
int connected = 0;
while(connected < 2){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
client[connected] = Sender;
Socket.Close();
sf::Thread* send = new sf::Thread(&sendInfo, &client[connected]);
sf::Thread* receive = new sf::Thread(&receiveInfo, &client[connected]);
// Start it !
send->Launch();
receive->Launch();
connected++;
}
while(true){
}
return EXIT_SUCCESS;
}
Client:
#include <SFML/System.hpp>
#include <SFML/Network.hpp>
#include <iostream>
void sendInfo(void *UserData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "client sending info.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), "127.0.0.1", 4444) != sf::Socket::Done)
{
// Error...
}
}
}
void receiveInfo(void *userData)
{
// Print something...
while(true){
// Create the UDP socket
sf::SocketUDP Socket;
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
char Buffer[128];
std::size_t Received;
sf::IPAddress Sender;
unsigned short Port;
if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) != sf::Socket::Done)
{
// Error...
}
// Show the address / port of the sender
std::cout << Buffer << std::endl;
Socket.Close();
}
}
int main()
{
// Create the UDP socket
sf::SocketUDP Socket;
// Create bytes to send
char Buffer[] = "Client Joined.";
// Send data to "192.168.0.2" on port 4567
if (Socket.Send(Buffer, sizeof(Buffer), "127.0.0.1", 4444) != sf::Socket::Done)
{
// Error...
}
sf::Thread* send = new sf::Thread(&sendInfo);
sf::Thread* receive = new sf::Thread(&receiveInfo);
// Start it !
send->Launch();
receive->Launch();
while(true){
}
return EXIT_SUCCESS;
}
First things first: is this a chat server or a 'more typical' server?
If this is a chat server, then you either need to have a list of sockets that are connected to the clients (you can connect UDP sockets using the connect() call, very convenient, and it also helps reduce the chances of spoofed peers) or a list of all the client addresses that you can supply to sendto() or sendmsg().
More 'typical' servers won't try to send messages to any client except the one that most recently made a request: those servers typically don't save anything from the clients, and instead will use recvfrom() or recvmsg() to get the peer's address for use in later sendto() or sendmsg() calls.
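A rough sketch of that request/reply pattern with plain BSD socket calls (UDP over IPv4; serverSocket is a placeholder for an already-bound socket, and error handling is minimal):
char buf[128];
sockaddr_in peer = {};
socklen_t peerLen = sizeof(peer);
// recvfrom() fills in the address of whichever client sent the datagram...
ssize_t n = recvfrom(serverSocket, buf, sizeof(buf), 0,
                     reinterpret_cast<sockaddr*>(&peer), &peerLen);
if (n >= 0)
{
    // ...and that address can be reused to answer the same client with sendto().
    sendto(serverSocket, buf, static_cast<size_t>(n), 0,
           reinterpret_cast<sockaddr*>(&peer), peerLen);
}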
Also, most protocols only rely on one well known port; the server uses one specific port by convention, but clients select whatever port is open and free. FTP relies heavily on well-known ports on the client-side as well, and as a result is a gigantic pain to tunnel through Network Address Translation firewalls.
It isn't just academic: both your client and your server are attempting to bind() to port 4444. This means you need at least two IP addresses on a single machine to test, or use virtualization software to run an entirely separate machine on the same hardware, or just have two machines available. It's more work than it needs to be, and there's no reason for the clients to care about their local port numbers:
Server:
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
Client:
// Bind it (listen) to the port 4567
if (!Socket.Bind(4444))
{
// Error...
}
Poof! These two will never run on the same host without significant tricks. I expect your "it connects to one" is probably just the server or the client connecting to itself, but without some code to fill in those // Error blocks, it'd be tough to tell for sure.
(And while we're here, I'd like to take an aside to talk about comments; comments that simply re-state what the code does aren't very useful. You'll note that most of your comments are in fact wrong, referring to wrong IPs or ports. Some just don't add any information:
// Create the UDP socket
sf::SocketUDP Socket;
I know we're taught to add comments, but sadly we're not always taught what kind of comments to add. The only comment in both programs that I'd recommend even keeping would be this one, slightly amended:
// udp doesn't require listen or accept
if (!Socket.Bind(4444))
It isn't obvious from reading the code, and won't be wrong when the port number is read out of an environment variable, command line parameter, configuration file, or registry. (It might be too redundant in a team of people familiar with the sockets API, but might be gold for a programmer not that familiar with the differences between UDP and TCP.)
Good function names, variable names, etc., will win over comments almost every time. End of aside. :)
And now, the more minor nit-picking: your thread handlers are doing some tasks like this:
while(1) {
socket s;
bind s;
r = recv s;
print r;
close s;
}
This needless creation, binding, and closing is all wasted energy, both the computer's energy and (much more importantly) your energy. Consider the following two re-writings:
recv_thread() {
socket s;
bind s;
while (1) {
r = recv s;
print r;
}
close s;
}
or
recv_thread(s) {
while (1) {
r = recv s;
print r;
}
}
/* ... */
socket s;
bind s;
sf::Thread* rt = new sf::Thread(&recv_thread);
rt->Launch(s);
The first option is a simple refactoring of your existing code; it keeps the socket creation and destruction in the thread function, but moves the loop invariants out of the loop. The code inside the loop now does only what is necessary.
The second option is a more drastic reworking: it moves the socket creation to the main thread, where error handling is probably much easier, and the thread function only does exactly what the remote peer needs that thread to do. (If you wanted to change from UDP to TCP, the second option would be by far the easier one to change -- your threading code might not need any modifications at all.)
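Applied to the question's receiveInfo(), the first option might look roughly like this (same SFML calls as in the question, with the error branches still left as stubs):
void receiveInfo(void* /*userData*/)
{
    // Create and bind the socket once, outside the loop.
    sf::SocketUDP Socket;
    if (!Socket.Bind(4444))
    {
        // Error...
        return;
    }
    char Buffer[128];
    std::size_t Received;
    sf::IPAddress Sender;
    unsigned short Port;
    while (true)
    {
        // Only the per-datagram work remains inside the loop.
        if (Socket.Receive(Buffer, sizeof(Buffer), Received, Sender, Port) == sf::Socket::Done)
            std::cout << Buffer << std::endl;
    }
}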
I hope this helps. :)