Quickly determine access to NTP server - C++

I'm trying to create a method that determines whether an NTP server is reachable. I made a simple method, but if there is no connection it waits a long time for an answer - about 5 seconds. I check 5 servers this way (for example, time.nist.gov), so in the worst case the wait is very long.
Question: is there an easy way to check to avoid waiting so long, about 1-2 seconds?
bool is_connection(const char* url)
{
    // e.g. url = "time.nist.gov"; gethostbyname() comes from <netdb.h>
    return gethostbyname(url) != NULL;
}

First, you should do all these checks in separate threads and join them to gather all the results at once.
Second, NTP uses UDP, so you can't check whether the port (123 for NTP) is open, since UDP isn't a connection-oriented protocol - i.e. you get no delivery result unless the server sends back another datagram to acknowledge yours. With TCP you can "ping" a port to check whether it's open, but not with UDP. You'll need to dive into RFC 1305 in order to be able to check that.
Resolving the name won't help you to check if it's a valid and working NTP server.
Anyway, your problem can be solved easily, but the solution most likely depends on your operating system (type, version, ...), your compiler (type, version, C++ standard used, ...), and the C++ frameworks allowed in your case (anything goes, restricted, portable or not, ...).
I highly doubt that an EFFICIENT solution in pure portable C++ exists, in particular if you're stuck with old C++ standards. An efficient solution is more likely totally platform-dependent.
You should specify your working environment in order to get a more precise solution.
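For illustration, here is a minimal sketch of such a check for POSIX systems, assuming an SNTP-style client request (mode 3) is an acceptable probe and a 1-second receive timeout is short enough. Note that name resolution itself can still block, so for a hard time bound you would still run the whole check in its own thread, as suggested above.

#include <netdb.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <unistd.h>

// Hypothetical helper: returns true only if `host` answers an SNTP request
// on UDP port 123 within roughly one second. Error checks are minimal.
bool is_ntp_server_reachable(const char* host)
{
    addrinfo hints{};
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_DGRAM;
    addrinfo* res = nullptr;
    if (getaddrinfo(host, "123", &hints, &res) != 0) // may itself block on DNS
        return false;

    int sock = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (sock < 0) { freeaddrinfo(res); return false; }

    timeval tv{1, 0}; // 1-second receive timeout instead of the default
    setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);

    unsigned char pkt[48] = {0x1B}; // LI=0, VN=3, Mode=3 (client); rest zeroed
    bool ok = sendto(sock, pkt, sizeof pkt, 0, res->ai_addr, res->ai_addrlen) == 48
              && recv(sock, pkt, sizeof pkt, 0) >= 48; // any full reply counts

    close(sock);
    freeaddrinfo(res);
    return ok;
}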

Related

How to wait for a value with timeout

I have a client/server program written in C++. I want to check the client's response (an attribute of a C++ object) to a command sent by the server, with a timeout if there is no response.
I am waiting for an expected value for some seconds. If the expected value is not observed, I need to return with a timeout. I was thinking about a thread and a poll that checks the expected value at a specific time interval.
I wonder if C++11/14 features - std::promise, std::future, std::condition_variable or something else - can do this more easily. The inconvenience I see is that I have to signal each value change with a notify.
Well, I need some advice.
None of the C++ language features you mentioned can help in your scenario, because they are intended for interaction within a single running program - which may be multi-threaded, but not separated into two completely independent processes.
However, the networking library you are using (on the server side) might possibly have convenience facilities for doing this.
I realize this is a general and somewhat vague answer, but your question was also not very specific.
How to wait for a value with timeout
Within a process, one would typically use a condition variable.
I want to check the client response ... through a command sent by the server
There is no standard way to communicate between processes in C++ (unless you count interaction with filesystem). As such, there is also no standard way to enforce a timeout on such communication.
Before you can know how to implement the timeout, you must figure out how you are going to communicate between the client and the server. That choice will be affected by what system you are targeting, so you should first figure that out.
If you are on a Linux environment you can try rpcgen and play with .x files, but you'll have to study it a bit. I'm not sure about a Windows environment. You can also use D-Bus, which is more intuitive.
[edit] D-Bus (probably libdbus for you) is a cross-platform IPC toolkit/library that can fit your need. rpcgen is an old tool that does the same thing but is more complicated. I don't have a snippet, I apologize, but you can search for "qt dbus example".
About the first requirement - the server waits for a response with a timeout.
Have you tried select() or poll()? They can monitor the socket connection between server and client for a given period.
Alternatively, signal() and alarm() can be used to check the response after a few seconds.
In the Berkeley sockets API, combining setsockopt() with SO_RCVTIMEO and SO_SNDTIMEO can also set a timeout for the request.
I'm not sure which library you are using, but I hope it has similar functions.
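For reference, a rough POSIX sketch of the select() variant (the helper name is made up):

#include <sys/select.h>
#include <sys/time.h>

// Returns 1 if `sock` becomes readable within `seconds`, 0 on timeout,
// -1 on error - so the caller can recv() without blocking forever.
int wait_readable(int sock, int seconds)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(sock, &readfds);
    timeval tv{seconds, 0}; // upper bound for this single wait
    return select(sock + 1, &readfds, nullptr, nullptr, &tv);
}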
The second requirement, you are waiting for expected value for a duration.
I think condition variable is a good solution for this.
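For the record, a minimal C++11 sketch of that idea (the shared state and names are hypothetical):

#include <chrono>
#include <condition_variable>
#include <mutex>

std::mutex m;
std::condition_variable cv;
int value = 0; // the attribute you are watching

// Blocks for at most one second; returns false on timeout.
bool wait_for_value(int expected)
{
    std::unique_lock<std::mutex> lock(m);
    return cv.wait_for(lock, std::chrono::seconds(1),
                       [&] { return value == expected; });
}

// Whatever code changes `value` must lock `m` and call cv.notify_one() -
// this is exactly the "notify on each change" inconvenience you mention.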
Why not use boost::thread with a timed_join?
boost::thread server_thread(::server_checker_method, arg1, arg2, arg3);
if (server_thread.timed_join(boost::posix_time::milliseconds(1000))) // wait for 1s
{
    // Expected value found in the server in less than 1s
}
else
{
    // Checking expected value took more than 1s, timeout!
}
You can put your checking mechanism in server_checker_method and return once the expected values are OK. Otherwise, keep iterating in the loop until the timeout is reached.

Determining if a file exists on a network drive without a 20 second timeout

Is there an easy way to determine if a file on a remote system exists without a 20-25 second hang if it doesn't?
Functions like...
PathFileExists();
GetFileAttributes();
...don't allow you to set a timeout duration, so when the file doesn't exist you end up waiting for a long time. I think it might be possible to put one of these calls into a thread and set the thread to expire after 1 second (or whatever), but I'd prefer to use a lightweight native Windows function or boost function rather than an inelegant threading solution.
It's a bit hard to prove a negative, but I will argue that no such method exists.
The normal Windows asynchronous I/O method uses the OVERLAPPED structure, and its documentation references the ReadFile and WriteFile methods. On the other hand, no variant of GetFileAttributes mentions OVERLAPPED or asynchronous I/O. Hence, it seems safe to assume it is always synchronous.
AFAIK no, generally there’s no easy way.
If your server is configured to respond to pings, you can use the IcmpSendEcho API to ping the server before accessing its shared files; the API is quite simple and it accepts a timeout.
If your server doesn't respond to pings (by default, modern versions of Windows don't), you can write a function that tries to connect to TCP port 135 or 445: if it connects, close the connection and return success; if it fails, return an error. This lets you implement a shorter timeout than the default.
In both methods, you'll need to resolve the network drive path into the name of the server; see e.g. the GetVolumePathName API.
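A rough Winsock sketch of that port probe (assumes WSAStartup has already been called; most error checks are trimmed, and treating a 1-second select() timeout as "unreachable" is a deliberate simplification):

#include <winsock2.h>
#include <ws2tcpip.h>

// Returns true if `server` accepts a TCP connection on port 445 within ~1s.
bool server_reachable(const char* server)
{
    addrinfo hints{};
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    addrinfo* res = nullptr;
    if (getaddrinfo(server, "445", &hints, &res) != 0)
        return false;

    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    u_long nonblocking = 1;
    ioctlsocket(s, FIONBIO, &nonblocking);        // make connect() return at once
    connect(s, res->ai_addr, (int)res->ai_addrlen);

    fd_set writefds;
    FD_ZERO(&writefds);
    FD_SET(s, &writefds);
    timeval tv{1, 0};                             // our own short timeout
    bool ok = select(0, nullptr, &writefds, nullptr, &tv) == 1;

    closesocket(s);
    freeaddrinfo(res);
    return ok;
}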

Socket programming beginners questions

I'm really new to this whole socket and server development; I'm not yet familiar with how it all works.
I made a simple Flash application that needs to communicate with a socket.
For that, I used a socket that supports AS3 and runs on "Red Tamarin".
Well, I'll get to the point:
I currently have a loop that always runs socket.receive()
It responds and even displays text that I send from my flash application.
My goal is to make a simple online Flash game,
probably using SQL/SQLite to save information and serve it to players.
What I don't understand is how to get there.
What I think I'll need to do is something like this:
On the server side:
Have a loop that runs as long as the server is alive. That loop should check every connection it has with clients and wait for commands coming from them, such as: log in, update player position, disconnect, request the list of objects at given positions.
Client side:
Send information to the server according to the action - e.g. when a player moves, send the new position to the server in a form similar to: "MovePlayer[name][x][y]"
Is my plan really how things should be?
And about the actual information being sent, I'm curious: will it be efficient to constantly send the server string data? (that's what I'm used to working with, not some weird bytes and stuff)
Thanks in advance!
You're on the right track. But I encourage you to first define a communication protocol. You can start by defining what a command looks like. For example:
COMMAND <space> PARAM1 <space> PARAM2 <line-break>
A few considerations on the protocol definition:
What if PARAM1 is a string and contains spaces? How can you tell the start and end of each parameter?
Your parameters could also contain a line-break.
If your client application is installed by your clients, they'll need to update it once in a while. To complicate even further, they may run an older version and expect it to work, even if you have changed your protocol. This imposes a need for protocol versioning. Keep that in mind if you require user interaction for updating the client part of your application.
These are the most fundamental considerations I can think of for your scenario. There may be other important considerations, but most of them depend on how your game works. Feel free to amend my list if you think I forgot something OP should consider.
After defining what a command looks like, document all the commands you believe your application needs. Don't split up the definition of a command unless it becomes too complex or excessively long for some of your operations. Try to keep things simple.
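To make the delimiter problem above concrete, here is one hedged sketch (in C++ for brevity, though the same idea applies in AS3): length-prefix every field instead of separating with spaces, so parameters may freely contain spaces or line-breaks. The encoding is made up for illustration.

#include <cstdio>
#include <string>
#include <vector>

// Encode the command and its parameters as <4-digit length><bytes> pairs.
// Assumes every field is shorter than 10000 bytes.
std::string encode(const std::vector<std::string>& fields)
{
    std::string out;
    for (const std::string& f : fields) {
        char len[5];
        std::snprintf(len, sizeof len, "%04zu", f.size()); // fixed-width length
        out += len;
        out += f; // payload may safely contain spaces or line-breaks
    }
    return out;
}

// encode({"MovePlayer", "some name", "12", "34"}) yields
// "0010MovePlayer0009some name000212000234"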
Now back to your questions:
Is my plan really how things should be?
Yes. That's exactly how it should be.
And about the actual information being sent, I'm curious: will it be efficient to constantly send the server string data? (that's what I'm used to working with, not some weird bytes and stuff)
That depends on a number of factors:
Which protocol you're using (TCP, UDP, etc);
Number of concurrent clients;
Average time to process a command;
Do you broadcast updates to other players?
How you implemented your server application;
Physical constraints:
Hardware: CPU, memory, etc;
Network: bandwidth, latency, etc;
Look at this:
https://code.google.com/p/spitfire-and-firedrop/
There you will see the basics of building a socket server with redtamarin.
See in particular
https://code.google.com/p/spitfire-and-firedrop/source/browse/trunk/spitfire/src/spitfire/Server.as
The details are as follows: redtamarin basically uses blocking sockets with select(), with a hard-coded maximum FD_SETSIZE of 4096. See:
https://code.google.com/p/redtamarin/wiki/Socket#maxConcurrentConnection
So here is what happens in your server loop: you have an array of socket objects, and every x milliseconds you loop over it, asking each socket whether it can be read. If you can read on a socket, you check whether that socket object is the server. If it is the server, that means you have a new connection; if not, that means a client is trying to send you data, so you read this data and pass it to an "interpreter". Later in the same loop, you check whether each socket object is still valid and whether you can write to it; if you can write and the socket object is not the server, then you can send data to the client.
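A compressed sketch of that loop in C/C++ terms (POSIX sockets; error handling, client removal, and the handoff to the interpreter are elided):

#include <sys/select.h>
#include <sys/socket.h>

// One pass of the loop: server_fd is the listening socket, clients[] the
// currently connected sockets.
void serve_once(int server_fd, int* clients, int* nclients)
{
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(server_fd, &readfds);
    int maxfd = server_fd;
    for (int i = 0; i < *nclients; i++) {     // watch every client socket too
        FD_SET(clients[i], &readfds);
        if (clients[i] > maxfd) maxfd = clients[i];
    }

    timeval tv{0, 100000};                    // wake up every 100 ms
    if (select(maxfd + 1, &readfds, nullptr, nullptr, &tv) <= 0)
        return;                               // nothing readable this round

    if (FD_ISSET(server_fd, &readfds))        // readable server = new client
        clients[(*nclients)++] = accept(server_fd, nullptr, nullptr);

    for (int i = 0; i < *nclients; i++)
        if (FD_ISSET(clients[i], &readfds)) { // readable client = incoming data
            char buf[512];
            ssize_t n = recv(clients[i], buf, sizeof buf, 0);
            if (n > 0) { /* hand buf[0..n) to your "interpreter" */ }
        }
}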
Here is the equivalent code in C for reference:
http://martinbroadhurst.com/source/select-server.c.html
http://www.lowtek.com/sockets/select.html
For a very basic example, look at socketpolicyd:
https://code.google.com/p/spitfire-and-firedrop/wiki/socketpolicyd
https://code.google.com/p/spitfire-and-firedrop/source/browse/trunk/socketpolicyd/src/spitfire/SocketPolicyServer.as
and compare the implementation with the Perl and PHP versions:
http://www.adobe.com/devnet/flashplayer/articles/socket_policy_files.html

Simulate network conditions with a C/C++ Socket

I'm looking for a way to add network emulation to a socket.
The basic solution would be some way to add bandwidth limitation to a connection.
The ideal solution for me would:
Support advanced network properties (latency, packet-loss)
Open-source
Have a similar API as standard sockets (or wraps around them)
Work on both Windows and Linux
Support IPv4 and IPv6
I saw a few options that work at the system level, or even as a proxy (Dummynet, WANem, neten, etc.), but that won't work for me, because I want to be able to emulate each socket manually (for example, open one socket with modem emulation and one with 3G emulation). Basically I want to know how these tools do it.
EDIT: I need to embed this functionality in my own product, therefore using an extra box or a third-party tool that needs manual configuration is not acceptable. I want to write code that does the same thing as those tools do, and my question is how to do it.
Epilogue: In hindsight, my question was a bit misleading. Apparently, there is no way to do what I wanted directly on the socket. There are two options:
Add delays to send/receive operations (based on @PaulCoccoli's answer):
By adding a delay before sending and receiving, you can get a very crude network simulation (a constant delay for latency; delaying sends so as not to exceed X bytes per second, for bandwidth).
Paul's answer and comment were great inspiration for me, so I award him the bounty.
Add the network simulation logic as a proxy (based on @m0she's and others' answers):
Either send the request through the proxy, or use the proxy to intercept the requests, then add the desired simulation. However, it makes more sense to use a ready-made solution instead of writing your own proxy implementation - from what I've seen, Dummynet is probably the best choice (it is what webpagetest.org uses). Other options are in the answers below; I'll also add DonsProxy.
This is the better way to do it, so I'm accepting this answer.
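For completeness, a crude sketch of the first option (delaying sends to cap the rate; the wrapper name is invented and POSIX sockets are assumed):

#include <chrono>
#include <thread>
#include <sys/socket.h>

// Sleep for the time this chunk "should" take at the target rate, then send.
// Crude: ignores queuing effects and only caps the average rate.
ssize_t throttled_send(int sock, const char* buf, size_t len, size_t bytes_per_sec)
{
    std::chrono::duration<double> budget(static_cast<double>(len) / bytes_per_sec);
    std::this_thread::sleep_for(budget);
    return send(sock, buf, len, 0);
}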
You can compile a proxy into your software that would do that.
It can be some implementation of a full-fledged SOCKS proxy (like this) or, probably better, something simpler that would only serve your purpose (and doesn't require prefixing your communication with the destination and other SOCKS overhead).
That code could run as a separate process or a thread within your process.
Adding throttling to a proxy shouldn't be too hard. You can:
delay forwarding of data if it passes some bandwidth limit
add latency by adding a timer before read/write operations on buffers.
If you're working with a connection-based protocol (like TCP), it would be senseless to drop packets, but with a datagram-based protocol (UDP) dropping would also be simple to implement.
The connection-creation API would be a bit different from normal POSIX/Winsock (unless you do some macro or other magic), but everything else (send/recv/select/close/etc.) is the same.
If you're building this into your product, then you should implement a layer of abstraction over the sockets API so you can select your own implementation at run time. Alternatively, you can implement wrappers of each socket function and select whether to call your own version or the system's version.
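A bare-bones sketch of that abstraction layer (all names invented):

#include <cstddef>

// The rest of the product codes against this interface; at run time you
// instantiate either the real implementation or the simulating one.
struct ISocket {
    virtual ~ISocket() = default;
    virtual long send(const char* buf, std::size_t len) = 0;
    virtual long recv(char* buf, std::size_t len) = 0;
};

// RealSocket would forward straight to the system calls; SimulatedSocket
// would add the delay/loss behavior described next, then delegate to a
// RealSocket.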
As for adding latency, you could have your implementation of the sockets API spin off a thread. In that thread, have a priority queue ordered by time (i.e. this background thread does a very basic discrete event simulation). Each "packet" you send or receive could be enqueued along with a delivery time. Each delivery time should have some amount of delay added. I would use some kind of random number generator with a Gaussian distribution.
The background thread would also have to simulate the other side of the connection, though it sounds like you may have already implemented that part?
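A sketch of that background-thread queue, with std::normal_distribution standing in for the Gaussian delay (all parameters and names are invented, and locking is omitted):

#include <algorithm>
#include <chrono>
#include <queue>
#include <random>
#include <string>
#include <vector>

using Clock = std::chrono::steady_clock;

struct Packet {
    Clock::time_point deliver_at; // when the simulation should release it
    std::string data;
    bool operator>(const Packet& o) const { return deliver_at > o.deliver_at; }
};

// Min-heap: earliest delivery time on top - a tiny discrete event simulation.
std::priority_queue<Packet, std::vector<Packet>, std::greater<Packet>> pending;
std::mt19937 rng{std::random_device{}()};
std::normal_distribution<double> delay_ms{50.0, 15.0}; // mean / stddev latency

void enqueue(std::string data)
{
    auto d = std::chrono::milliseconds(
        static_cast<int>(std::max(0.0, delay_ms(rng))));
    pending.push({Clock::now() + d, std::move(data)});
}

// The background thread repeatedly pops packets whose deliver_at has passed
// and hands them to the real socket.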
I only know of the Network Link Conditioner for Mac OS X Lion. You need to be a Mac developer to download it, so I cannot put a download link here. Only a description from 9to5mac.com: http://9to5mac.com/2011/08/10/new-in-os-x-lion-network-link-conditioner-utility-lets-you-simulate-internet-and-bandwidth-conditions/
This answer might be a partial solution for you when using Linux:
Simulate delayed and dropped packets on Linux. It refers to a kernel module called netem, which can simulate all kinds of network problems.
If you want to work with TCP connections, having "packet loss" could be problematic, since a lot of error handling (like recovering lost packets) is done in the kernel. Simulating this in a cross-platform way could be hard.
You usually add a network device to your network that throttles bandwidth or latency on a port-by-port basis; you can then achieve what you want just by connecting to the port allocated to the particular type of crappy network you want to test, with no code changes or modifications required.
The easiest way to do this is to add iptables rules to a Linux server acting as a proxy.
If you want it to work without a separate device, try trickle, a software package that throttles your network on the client PC (or an equivalent for Windows).
You may want to check WANem: http://wanem.sourceforge.net/ . WANem is open source and licensed under the GNU General Public License.
WANem allows the application development team to set up a transparent application gateway which can be used to simulate WAN characteristics like network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
I think you could use a tool like Network Simulator. It's free, for Windows.
The only thing to do is to setup your program to use the right ports (and the settings for the network, of course).
If you want a software only solution that you control, you will have to implement it yourself. I know of no such existing package.
While a wrapper layer over a socket may give you the ability to introduce delay, it won't be sufficient to introduce loss or out-of-order delivery. In order to simulate those behaviors, you actually need to intercept the data in transit between the two TCP stacks.
The approach I would recommend is to use a tunneling device (say tunX). Routes should be set so the client believes the way to the server is through tunX. Additional code (perhaps running in a different thread) would promiscuously intercept traffic on tunX, and perform your augmented behavior, before forwarding packets over the true physical interface that will get the traffic to your server. The reverse would happen for packets arriving from the server on the physical interface. Those packets would be intercepted by the client code, behavior augmented, before forwarding through tunX.
However, since you are testing client software, I am unclear as to why you would want to embed this code in your released software, unless the software itself is a WAN simulating client.

How to count SYN/ESTABLISHED connection to server?

I want to get the number of SYN and ESTABLISHED connections to my server with C/C++, but I don't want to call popen to run netstat or any other Linux command. I've managed to scan /proc/net/ip_conntrack and get the numbers, but I realize that scanning ip_conntrack requires significant resources each time my application invokes that method. Is there any other simple way?
Scanning /proc/net/ip_conntrack is not reliable because it only works if netfilter/connection tracking is enabled. And it doesn't only count connections to your server but also through your server (if it's acting as a router).
Better would be to get the information from the same places netstat does: /proc/net/tcp, /proc/net/tcp6 (and similar files for UDP and other protocols, if you care about those). That amounts more or less to reimplementing netstat inside your application, though; you have to wonder if it's worth it. Also, calling netstat is (more or less) portable, whereas reading those files directly is Linux-specific.
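If you do go that way, the parsing is short. A sketch that counts ESTABLISHED (state 01) and SYN_RECV (state 03) rows in /proc/net/tcp:

#include <fstream>
#include <sstream>
#include <string>

// Linux-specific, as noted above; read /proc/net/tcp6 the same way if needed.
void count_tcp_states(int& established, int& syn_recv)
{
    established = syn_recv = 0;
    std::ifstream tcp("/proc/net/tcp");
    std::string line;
    std::getline(tcp, line);                      // skip the header row
    while (std::getline(tcp, line)) {
        std::istringstream fields(line);
        std::string sl, local, remote, state;
        fields >> sl >> local >> remote >> state; // 4th column is the state
        if (state == "01") ++established;         // 01 = ESTABLISHED
        else if (state == "03") ++syn_recv;       // 03 = SYN_RECV
    }
}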
I know you are concerned about the resources required to scan the full table every time, but I don't think there's a way to "subscribe" and get notifications when new connections are established or torn down. The closest thing I can think of would be to sniff the network interface (using libpcap) and keep track of connection setups and teardowns yourself.