How to know if a server is online or not, if we only know the IP? - c++

How can that be done? The server is not an HTTP server; it's an ArmA game server.
I tried to achieve it using libcurl in the following code, but it didn't work: it always shows Offline.
IsOnline( "xx.xxx.xx.xxx" );

bool IsOnline( string url )
{
    CURL *curl = curl_easy_init();
    if ( curl )
    {
        curl_easy_setopt( curl, CURLOPT_URL, url.c_str() );
        CURLcode result = curl_easy_perform( curl );
        if ( result != CURLE_OK )
        {
            error_string = curl_easy_strerror( result );
            curl_easy_cleanup( curl );   // cleanup was missing before this early return
            curl_global_cleanup();
            return false;
        }
        curl_easy_cleanup( curl );
        curl_global_cleanup();
        return true;
    }
    return false;
}

What you're doing in your code is sending an HTTP request. It fails because the game server machine isn't running an HTTP server.
The usual way to find out whether a machine is online or not is by pinging it. Ping uses the ICMP protocol. Here is a tutorial explaining how to ping from a C++ program (it's Windows-specific): http://www.developerfusion.com/article/4628/how-to-ping/
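For reference, here is a minimal, Windows-specific ping sketch using the IP Helper API (IcmpSendEcho). This is my own illustration rather than code taken from the linked tutorial, and it assumes the target is given as an IPv4 address in dotted notation:

#include <winsock2.h>
#include <windows.h>
#include <iphlpapi.h>
#include <icmpapi.h>
#include <cstdlib>
// link against iphlpapi.lib and ws2_32.lib

// Sends one ICMP echo request and returns true if a reply arrived in time.
bool PingHost( const char* ipv4 )
{
    HANDLE hIcmp = IcmpCreateFile();
    if ( hIcmp == INVALID_HANDLE_VALUE )
        return false;

    char  payload[]  = "ping";
    DWORD replySize  = sizeof( ICMP_ECHO_REPLY ) + sizeof( payload );
    void* replyBuf   = malloc( replySize );

    DWORD replies = IcmpSendEcho( hIcmp, inet_addr( ipv4 ), payload, sizeof( payload ),
                                  NULL, replyBuf, replySize, 1000 /* ms timeout */ );
    free( replyBuf );
    IcmpCloseHandle( hIcmp );
    return replies != 0;   // 0 means no reply: host down, ICMP filtered, or timed out
}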
However, the admins of the server might have disabled ICMP, in which case the machine won't respond to pings even if it's online. Moreover, you can get "false positives" if the machine is online and responding to pings, but the game server software itself isn't running.
So I think your best bet would be to find out which ports the game server itself listens on and attempt a connection on those ports.
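A sketch of that last suggestion, using libcurl's CURLOPT_CONNECT_ONLY so that only a TCP connection attempt is made and no HTTP request is sent. The port is a placeholder you would replace with whatever port the game server actually listens on, and this only works if that port accepts TCP; many game servers are reachable only over UDP, in which case a protocol-specific query would be needed instead.

#include <curl/curl.h>
#include <string>

// Returns true if a TCP connection to host:port could be established.
bool CanConnect( const std::string& host, long port )
{
    CURL* curl = curl_easy_init();
    if ( !curl )
        return false;

    std::string url = "http://" + host;                   // scheme only tells libcurl to use plain TCP
    curl_easy_setopt( curl, CURLOPT_URL, url.c_str() );
    curl_easy_setopt( curl, CURLOPT_PORT, port );
    curl_easy_setopt( curl, CURLOPT_CONNECT_ONLY, 1L );   // connect, then stop
    curl_easy_setopt( curl, CURLOPT_CONNECTTIMEOUT, 5L ); // seconds

    CURLcode result = curl_easy_perform( curl );
    curl_easy_cleanup( curl );
    return result == CURLE_OK;
}

// Usage, with a placeholder port:
// bool online = CanConnect( "xx.xxx.xx.xxx", 12345 );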

Related

Slow Performance with libcurl when changing IP Address

First time developing with libcurl and I got some unexpected results.
I'm trying to reuse the same CURL handle in order to reuse the connections associated with it. However, I noticed a huge performance drop when I constantly change the IP address of my requests.
When I keep the IP address unchanged, everything is fine. I'm pretty sure there's something I'm missing, but I don't know what, or where to search.
Here's my code:
// curl init
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, &RestClientPool::WriteCallBack); // custom write function
curl_easy_setopt(curl, CURLOPT_TCP_KEEPALIVE, 1L);
curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_1);
curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 200L);
// perform request, curl handle is stored and reused
curl_easy_setopt(curl, CURLOPT_HTTPGET, 1L);
curl_easy_setopt(curl, CURLOPT_URL, req_url.c_str()); // here req_url is something like http://123.123.123.123:8080/api/api
curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);
CURLcode res = curl_easy_perform(curl);
Problem:
Say I have 3 hosts with IPs a, b and c. If I send requests to a,b,c,a,b,c,a... 100 times vs. a,a,a,a,a,a... 100 times with the same handle, the total time is around 9 s vs. 1 s (the services on all hosts are identical, so host-side issues can be ruled out).
PS: I can tell that the libcurl handle is reusing the connections (from the verbose output and from monitoring the servers), so new connections on every IP change shouldn't be the problem here.
Environment:
libcurl:
curl 7.86.0 (x86_64-pc-linux-gnu) libcurl/7.86.0 OpenSSL/1.0.2k-fips zlib/1.2.7
Release-Date: 2022-10-26
Protocols: dict file ftp ftps gopher gophers http https imap imaps mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: alt-svc AsynchDNS HSTS HTTPS-proxy IPv6 Largefile libz NTLM NTLM_WB SSL UnixSockets
os:
CentOS 7.6.1810
What I tried:
1. I tried using CURLOPT_RESOLVE (because I thought it might have something to do with the DNS cache), but nothing changed.
2. I created three handles and fired off 3 threads, where each handle repeatedly sent requests to a single IP (host A, for example; no handle was shared across threads). That worked fine.
However, when I gave each handle a different IP (thread 1 now repeatedly requests host A, thread 2 host B and thread 3 host C), the performance drop appears again.
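One way to narrow down where the extra ~8 seconds go (a diagnostic suggestion, not something from the original post) is to query libcurl's per-transfer statistics after each curl_easy_perform() call. If CURLINFO_NUM_CONNECTS is non-zero, or the name-lookup/connect times are large only on the alternating-IP runs, that identifies the slow phase:

// Run right after each curl_easy_perform(curl) on the same handle (needs <cstdio> for printf).
long new_connects = 0;
double namelookup = 0.0, connect_time = 0.0, total = 0.0;
curl_easy_getinfo(curl, CURLINFO_NUM_CONNECTS, &new_connects);   // new connections created for this transfer
curl_easy_getinfo(curl, CURLINFO_NAMELOOKUP_TIME, &namelookup);
curl_easy_getinfo(curl, CURLINFO_CONNECT_TIME, &connect_time);
curl_easy_getinfo(curl, CURLINFO_TOTAL_TIME, &total);
printf("new connects=%ld lookup=%.3fs connect=%.3fs total=%.3fs\n",
       new_connects, namelookup, connect_time, total);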

C++ libcurl: curl_easy_perform() returned CURLE_OK even though it received response code 226

I'm sending files from a Linux server to a remote Windows system using libcurl FTP.
Below is the code:
curl_easy_setopt(CurlSessionHandle, CURLOPT_URL, remoteFileUrl);
curl_easy_setopt(CurlSessionHandle, CURLOPT_UPLOAD, ON);
// Set the input local file handle
curl_easy_setopt(CurlSessionHandle, CURLOPT_READDATA, localFileHandle);
// Set on/off all wanted options
// Enable ftp data connection
curl_easy_setopt(CurlSessionHandle, CURLOPT_NOBODY, OFF);
// Create missing directory into FTP path
curl_easy_setopt(CurlSessionHandle, CURLOPT_FTP_CREATE_MISSING_DIRS , ON) ;
// Set the progress function, in order to check the stop transfer request
curl_easy_setopt(CurlSessionHandle, CURLOPT_NOPROGRESS, OFF);
curl_easy_setopt(CurlSessionHandle, CURLOPT_PROGRESSFUNCTION, progressCb);
curl_easy_setopt(CurlSessionHandle, CURLOPT_PROGRESSDATA, this);
CURLcode Result = curl_easy_perform(CurlSessionHandle);
====================================================================
A few files are not transferred to the remote system, but I didn't receive any error from curl_easy_perform(). This happens randomly.
I have collected Wireshark logs; the trace shows that a [RST, ACK] was sent from our side to the remote system (I don't know the reason) and that response code 226 was sent from the remote system. I think I should receive some error code from curl_easy_perform() instead of CURLE_OK. Please correct me if I'm wrong.
Please check the image which has the Wireshark traces.
The source IP ending in 82 is the Linux server and the destination IP ending in 87 is the Windows remote system.
I would like to know why we are sending [RST, ACK] to the remote system and why libcurl is not returning an error code. Can someone explain whether there is a way to handle this problem?
I have uploaded images of the success and failure cases. Please check and let me know.
Failure case, check the response arg
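As a side note, FTP reply code 226 means the server considered the data transfer complete, which is consistent with libcurl returning CURLE_OK; so if files still end up missing or truncated on the Windows side, an independent check after the transfer may help. A hedged sketch, assuming the local file size is known beforehand (localFileSize here is a placeholder, e.g. obtained via fstat() on the file behind localFileHandle). CURLINFO_SIZE_UPLOAD_T needs libcurl 7.55.0 or newer; older versions offer CURLINFO_SIZE_UPLOAD with a double instead:

// After curl_easy_perform() has returned, compare what libcurl uploaded
// against the local file size to catch silently short transfers.
curl_off_t uploaded = 0;
curl_easy_getinfo(CurlSessionHandle, CURLINFO_SIZE_UPLOAD_T, &uploaded);
if (Result == CURLE_OK && uploaded != localFileSize)
{
    // Treat as a failed transfer: log it and retry the upload.
}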

Mixed-mode communication between IPv4 and IPv6

I have an application which acts as both client and server. As a server it accepts SOAP requests on port xxxx [agent URL] and sends notifications to the sender on port yyyy [notification URL].
So basically it acts as a server on port xxxx and as a client on port yyyy. My service has a dedicated IP, either IPv6 or IPv4.
We are using gSOAP for communication and are overriding the gSOAP function tcp_connect() for client-side binding.
Currently I am facing issues with the transition of the service to IPv6. Use case: I am listening on an IPv6 address and my notification URL is IPv4...
In the gSOAP implementation a socket is created from the notification URL:
sk = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
Now we try to bind this socket accordingly (to either IPv4 or IPv6):
struct addrinfo hints, *res, *p;
int status;
const char* client_ip = AGENT_CLIENT_IP;

memset(&hints, 0, sizeof(hints));
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;

if( (status = getaddrinfo(client_ip, NULL, &hints, &res)) != 0 )
{
    sprintf(err_msg_buf, "bind failed in tcp_connect()");
    soap->fclosesocket(soap, sk);
    return error;
}
for( p = res; p != NULL; p = p->ai_next )
{
    if( p->ai_family == AF_INET )
    {
        struct sockaddr_in *ipv4 = (struct sockaddr_in *)p->ai_addr;
        status = bind(sk, (struct sockaddr *)ipv4, (int)sizeof(struct sockaddr_in));
    }
    else if( p->ai_family == AF_INET6 )
    {
        struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)p->ai_addr;
        status = bind(sk, (struct sockaddr *)ipv6, (int)sizeof(struct sockaddr_in6));
    }
    else
    {
        sprintf(err_msg_buf, "tcp_connect() error. IP address neither IPv6 nor IPv4");
        soap->fclosesocket(soap, sk);
        return error;
    }
    break;
}
if( -1 == status )
{
    sprintf(err_msg_buf, "Binding to client host IP failed in tcp_connect()");
    return error;
}
Since the socket has already been created (according to the type of the notification URL), bind fails if there is a family mismatch.
How can I make my client-side binding work when the socket family and the agent IP address are of different families?
Maybe I am not getting what you are trying to do, or you have some misunderstanding of how TCP/IP and RPC normally work.
Let me paraphrase your setup and then show what I think is odd about it.
You have a server and one or multiple clients. The server accepts IPv4 and IPv6 connections on a fixed port, let us say 1337. To answer a request, you open a new TCP stream (or maybe SOAP connection) on a different fixed port, say 1338. You are now wondering why, when a second client connects, the bind to 1338 fails?
The short answer is: "The port is in use, duh, use a different port!"
But that misses the point that the setup is, to say the least, odd. Although I have never used gSOAP, I have used SOAP and other RPC frameworks, and what you outline is weird, unless I am missing something you did not mention.
The first thing that is odd: if you need to answer a SOAP request, why don't you simply formulate the answer as a return value? Call the SOAP function and the client will block until it gets the answer. If you don't want the call to block for the relatively long duration of the call, do the entire thing asynchronously.
So you want to pass data back to the client later? Here you have two solutions: either the client polls the server, or you open a new SOAP connection to the client. The first solution is generally preferable, because in most cases the client can connect to the server but not the other way around (for example, the client can be behind a NAT; what do you do then?). The second solution works well when you know that the client will always be reachable.
It seems to me that you are trying the second, "return channel" solution. In that case, why are you binding to a port at all? The client side of an IP connection does not need to be bound to a port; the OS will automatically assign an available one. What you do need is for the client to bind its back-channel listening socket to a well-known IP; the server then uses that well-known client port in its connect() call (or not, since you are using SOAP).
Since this is all confusing let me illustrate this with a small diagram:
                  Client          Server
                  ------          ------
Request Channel   <random port>   1337
Back Channel      1338            <random port>
To sum up:
Either you are reimplementing something that SOAP already handles, and you should stop doing that, or, if you absolutely need a back channel, simply don't call bind() on a client socket.
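For what it's worth, here is a minimal sketch of that "no bind()" connect path using plain POSIX calls (not gSOAP-specific; the host and port arguments are placeholders for whatever the notification URL parses into). The socket is created from each getaddrinfo() candidate, so its family always matches, and no local bind() is issued, so the OS picks the source address and port:

#include <netdb.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

// Returns a connected TCP socket, or -1 on failure.
int connect_to_notification_url(const char* host, const char* port)
{
    struct addrinfo hints, *res, *p;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;       // accept both IPv4 and IPv6 candidates
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, port, &hints, &res) != 0)
        return -1;

    int sk = -1;
    for (p = res; p != NULL; p = p->ai_next)
    {
        sk = socket(p->ai_family, p->ai_socktype, p->ai_protocol);
        if (sk < 0)
            continue;
        if (connect(sk, p->ai_addr, p->ai_addrlen) == 0)
            break;                     // connected; socket family matches the address by construction
        close(sk);
        sk = -1;
    }
    freeaddrinfo(res);
    return sk;
}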

Unable to connect to server (over a remote connection)

I have been working on this project for a while and wanted to test some new features over a remote connection, but the client failed to connect (while it was able to connect in the past). Everything works fine locally. At the moment I am not able to port forward, so I'm using Hamachi. I have tried capturing the Hamachi network traffic with Wireshark: the client requests do arrive, but the server doesn't receive them.
Any help is greatly appreciated.
Code (error checking left out to make it more readable):
Client:
addrinfo ADDRESSINFO, *CLIENTINFO=NULL;
ZeroMemory(&ADDRESSINFO, sizeof(ADDRESSINFO));
ADDRESSINFO.ai_family = AF_INET;
ADDRESSINFO.ai_socktype = SOCK_STREAM;
ADDRESSINFO.ai_protocol = IPPROTO_TCP;
ConnectSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
getaddrinfo(strIP.c_str(), strPort.c_str(), &ADDRESSINFO, &CLIENTINFO);
connect(ConnectSocket, CLIENTINFO->ai_addr, CLIENTINFO->ai_addrlen);
freeaddrinfo(CLIENTINFO);
Server:
addrinfo ADDRESSINFO, *SERVERINFO=NULL;
ZeroMemory(&ADDRESSINFO, sizeof(ADDRESSINFO));
ADDRESSINFO.ai_family = AF_INET;
ADDRESSINFO.ai_socktype = SOCK_STREAM;
ADDRESSINFO.ai_protocol = IPPROTO_TCP;
getaddrinfo(SERVER_IP, SERVER_PORT, &ADDRESSINFO, &SERVERINFO);
ListenSocket = socket(SERVERINFO->ai_family, SERVERINFO->ai_socktype, SERVERINFO->ai_protocol);
ConnectionSocket = socket(SERVERINFO->ai_family, SERVERINFO->ai_socktype, SERVERINFO->ai_protocol);
bind(ListenSocket, SERVERINFO->ai_addr, SERVERINFO->ai_addrlen);
freeaddrinfo(SERVERINFO);
listen(ListenSocket, SOMAXCONN);
while(true)
{
    if( (ConnectionSocket = accept(ListenSocket, NULL, NULL)) != INVALID_SOCKET )
    {
        //do stuff
    }
}
I do not know how abridged the code you've pasted is, but:
1) There is no place where you set the destination address.
2) There is no place where you set the destination port.
3) To which port is the server trying to bind?
...so this just cannot work at all.
Moreover, please do handle errors. (Yes, you said you've omitted them on purpose.) But I bet that if the server refuses the connection, your error handling would show it; otherwise it connects fine, yet you claim otherwise. You also say:
1) "the client failed to connect"
2) and later: "the client requests do arrive, but the server doesn't receive them"
If you are able to connect, you should see the 3-way handshake (TCP stream connection); if not, error handling and Wireshark will show that. You say that client requests do arrive, but your code is not sending anything (no sending code is shown). You also say that the server does not receive them; if it connects and you send anything, there is no way that your error handling shows nothing and the server receives nothing (though the server code lacks any receive call).
I think right now you cannot get much help with this. Update your code, verify that it really works locally (you mean over loopback, right?), then test it remotely, add error handling, and use Wireshark on both the client and server side.
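As a starting point, here is a hedged sketch of what that error handling could look like on the client side; it reuses the variable names from the question and assumes WSAStartup() has already been called and the Winsock/iostream headers are included:

ConnectSocket = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
if (ConnectSocket == INVALID_SOCKET)
{
    std::cerr << "socket() failed: " << WSAGetLastError() << std::endl;
    return 1;
}
int rc = getaddrinfo(strIP.c_str(), strPort.c_str(), &ADDRESSINFO, &CLIENTINFO);
if (rc != 0)
{
    std::cerr << "getaddrinfo() failed: " << rc << std::endl;
    closesocket(ConnectSocket);
    return 1;
}
if (connect(ConnectSocket, CLIENTINFO->ai_addr, (int)CLIENTINFO->ai_addrlen) == SOCKET_ERROR)
{
    std::cerr << "connect() failed: " << WSAGetLastError() << std::endl;
    freeaddrinfo(CLIENTINFO);
    closesocket(ConnectSocket);
    return 1;
}
freeaddrinfo(CLIENTINFO);
// connected; send()/recv() go here, each with its own error check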

Does setting CURLOPT_URL force libcurl to create a second FTP connection?

I want to open a connection to an FTP server and download 2 different files. The names are totally different and I cannot use wildcards.
I expected I could set the hostname and the file, call curl_easy_perform, then set the file again and call curl_easy_perform one last time.
However, it seems I have to use CURLOPT_URL, which includes both the hostname and the filename.
My fear is that the following code (error checking omitted just to keep it short):
...
curl_easy_setopt(handle, CURLOPT_URL, "ftp://myserver//foo.dat");
curl_easy_perform(handle);
curl_easy_setopt(handle, CURLOPT_URL, "ftp://myserver//bar.png");
curl_easy_perform(handle);
opens the FTP connection twice, causing a lot of avoidable overhead.
So am I missing something here? Will libcurl notice that the hostname part is the same and avoid opening the same connection twice? If not, how can I open the connection only once?
Enabling CURLOPT_VERBOSE showed the following:
* Connection #0 to host 127.0.0.1 left intact
* Re-using existing connection! (#0) with host 127.0.0.1
* Connected to 127.0.0.1 (127.0.0.1) port 21 (#0)
* Request has same path as previous transfer
Also, Wireshark showed that the connection to port 21 is made only once and lasts throughout the whole transfer (including both files).
However, one connection per file is made on another port because of FTP passive mode, but I think this is not curl's fault.
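For reference, here is a self-contained sketch of the reuse pattern discussed above, with verbose output left on so the "Re-using existing connection" line can be observed; the server and file names are the placeholders from the question:

#include <curl/curl.h>
#include <cstdio>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* handle = curl_easy_init();
    if (!handle)
        return 1;

    curl_easy_setopt(handle, CURLOPT_VERBOSE, 1L);

    const char* urls[]  = { "ftp://myserver//foo.dat", "ftp://myserver//bar.png" };
    const char* files[] = { "foo.dat", "bar.png" };

    for (int i = 0; i < 2; ++i)
    {
        FILE* out = fopen(files[i], "wb");
        if (!out)
            break;
        curl_easy_setopt(handle, CURLOPT_URL, urls[i]);
        curl_easy_setopt(handle, CURLOPT_WRITEDATA, out);   // default write callback fwrite()s into this FILE*
        CURLcode rc = curl_easy_perform(handle);            // control connection is reused across iterations
        if (rc != CURLE_OK)
            fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(rc));
        fclose(out);
    }

    curl_easy_cleanup(handle);
    curl_global_cleanup();
    return 0;
}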