simulate socket errors - c++

How do you simulate socket errors? (Sometimes the server or client disconnects because of some socket error, and it is impossible to reproduce.)
I was looking for a tool to do this, but I can't find one.
Does anyone know of a tool, or have a code example of how to do this? (C# or C/C++)

Add a wrapper layer around the APIs you're using to access the sockets and have them fail a configurable percentage of the time (e.g. based on rand() % 100).
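For instance, a minimal sketch of that idea for POSIX sockets (the wrapper name and failure-rate variable are made up; the same shape works for recv, connect, etc.):
#include <cerrno>
#include <cstdlib>
#include <sys/types.h>
#include <sys/socket.h>

static int g_fail_percent = 10;   // tune per test run

// Drop-in replacement for send(): fails a configurable percentage of calls.
ssize_t send_wrapper(int fd, const void* buf, size_t len, int flags)
{
    if (std::rand() % 100 < g_fail_percent)
    {
        errno = ECONNRESET;       // pretend the peer reset the connection
        return -1;
    }
    return ::send(fd, buf, len, flags);
}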

I had exactly the same problem this summer.
I had a custom Socket class and wanted to test what would happen if read or write threw an exception. I really wanted to mimic the Java mocking frameworks, and I did it like this:
I derived a FakeSocket class from the Socket class, and created something called a SocketExpectation. Then, in the unit tests, I created fake sockets, set up the expectations and passed the fake socket to the code I wanted to test.
The FakeSocket had these methods (stripped of unneeded details):
uint32_t write(buffer, length); // calls check
uint32_t read(buffer, length); // calls check
bool matches();
void expect(expectation);
uint32_t check(CallType, buffer, length) const;
They're all pretty straightforward. check compares the arguments against the current expectation and, if everything is according to plan, carries out the SocketExpectation's requirement (a rough sketch of check follows the usage example below).
The SocketExpectation has this outline (also stripped):
typedef enum { write, read } CallType;
SocketExpectation(CallType type);
SocketExpectation &with_arguments(void *a1, uint32_t a2); // expects these args
SocketExpectation &will_return(uint32_t value);
SocketExpectation &will_throw(const char * e); // test error handling
bool matches();
I added more methods as I needed them. I would create it like this, then pass the fake socket to the relevant method:
FakeSocket fake_socket;
fake_socket.expect(SocketExpectation(write).with_arguments(....).will_return(...));
fake_socket.expect(SocketExpectation(read).with_arguments(...).will_throw("something"));
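The body of check wasn't shown; here is a rough sketch of how it might look, based only on the description above (the expectation queue and the accessors on SocketExpectation are hypothetical):
#include <cassert>
#include <stdexcept>

uint32_t FakeSocket::check(CallType type, void* buffer, uint32_t length) const
{
    const SocketExpectation& e = current_expectation();  // hypothetical accessor over a queue
    assert(e.matches(type, buffer, length));             // wrong call or arguments fails the test
    if (e.has_exception())                               // set via will_throw(...)
        throw std::runtime_error(e.exception_text());
    return e.return_value();                             // set via will_return(...)
}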

My socket code unit tests are probably better described as integration tests, as I drive the code under test to connect to a 'mock' remote peer. Since the remote peer is under the control of the test (it's just a simple client or server), I can have the test cause the remote peer to disrupt the connection in various ways and then ensure that the code under test reacts as expected. It takes a little work to set up, but once you have all the pieces in place it becomes pretty trivial to test most situations.
So, I guess, my suggestion is that rather than attempting to simulate the situations that you're encountering you should understand them and then reliably generate them.
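For example, one disruption worth scripting into the mock peer is an abortive close; a rough POSIX-sockets sketch (Winsock is analogous with closesocket):
#include <sys/socket.h>
#include <unistd.h>

// Make the test peer abort the connection instead of closing it cleanly, so the
// code under test sees a hard reset (ECONNRESET) rather than an orderly EOF.
void abort_connection(int fd)
{
    struct linger lin;
    lin.l_onoff  = 1;   // enable SO_LINGER...
    lin.l_linger = 0;   // ...with a zero timeout, so close() sends an RST
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lin, sizeof(lin));
    close(fd);          // the peer's next read/write fails with ECONNRESET
}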

Related

DNS-SD on Windows using MFC

I have an application built using MFC that I need to add Bonjour/Zeroconf service discovery to. I've had a bit of trouble figuring out how best to do it, but I've settled on using the DLL stub provided in the mDNSResponder source code and linking my application to the static lib generated by that (which in turn uses the system dnssd.dll).
However, I'm still having problems, as the callbacks don't always seem to be called, so my device discovery stalls. What confuses me is that it all works absolutely fine under OS X, using the OS X dns-sd terminal service, and under Windows using the dns-sd command-line service. On that basis, I'm ruling out the client service as the problem and trying to figure out what's wrong with my Windows code.
I'm basically calling DNSServiceBrowse(), then in that callback calling DNSServiceResolve(), then finally calling DNSServiceGetAddrInfo() to get the IP address of the device so I can connect to it.
All of these calls are based on using WSAAsyncSelect, like this:
DNSServiceErrorType err = DNSServiceResolve(&client,
                                            kDNSServiceFlagsWakeOnResolve,
                                            interfaceIndex,
                                            serviceName,
                                            regtype,
                                            replyDomain,
                                            ResolveInstance,
                                            context);
if (err == 0)
{
    err = WSAAsyncSelect((SOCKET)DNSServiceRefSockFD(client), p->m_hWnd,
                         MESSAGE_HANDLE_MDNS_EVENT, FD_READ | FD_CLOSE);
}
But sometimes the callback just never gets called, even though the service is there and the command-line tool confirms it.
I'm totally stumped as to why this isn't 100% reliable here, yet it works every time when I use the same DLL from the command line. My only possible explanation is that DNSServiceResolve tries to call the callback function before WSAAsyncSelect has registered the handling message for the socket, but I can't see any way around this.
I've spent ages on this and am now completely out of ideas. Any suggestions would be welcome, even if they're "that's a really dumb way to do it, why aren't you doing X, Y, Z".
I call DNSServiceBrowse, with a "shared connection" (see dns_sd.h for documentation) as in:
DNSServiceCreateConnection(&ServiceRef);
// Need to copy the main ref to another variable.
DNSServiceRef BrowseServiceRef = ServiceRef;
DNSServiceBrowse(&BrowseServiceRef,               // Receives reference to Bonjour browser object.
                 kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                 kDNSServiceInterfaceIndexAny,    // Browse on all network interfaces.
                 "_servicename._tcp",             // Browse for service types.
                 NULL,                            // Browse on the default domain (e.g. local.).
                 BrowserCallBack,                 // Callback function when Bonjour events occur.
                 this);                           // Callback context.
This is inside a main run method of a thread class called ServiceDiscovery. ServiceRef is a member of ServiceDiscovery.
Then immediately following the above code, I have a main event loop like the following:
while (true)
{
    err = DNSServiceProcessResult(ServiceRef);
    if (err != kDNSServiceErr_NoError)
    {
        DNSServiceRefDeallocate(BrowseServiceRef);
        DNSServiceRefDeallocate(ServiceRef);
        ServiceRef = nullptr;
        break;   // stop the loop once the shared connection has been torn down
    }
}
Then, in BrowserCallBack you have to set up the resolve request:
void DNSSD_API ServiceDiscovery::BrowserCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* inName,
                                                 const char* inType,
                                                 const char* inDomain,
                                                 void* inContext)
{
    (void)inServiceRef; // Unused
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    ...
    // Pass a copy of the main DNSServiceRef (just a pointer). We don't hang
    // on to the local copy since it's passed to the resolve callback, where
    // we deallocate it.
    DNSServiceRef resolveServiceRef = sd->ServiceRef;
    DNSServiceErrorType err =
        DNSServiceResolve(&resolveServiceRef,
                          kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                          inIFI,
                          inName,
                          inType,
                          inDomain,
                          ResolveCallBack,
                          sd);
}
Then in ResolveCallBack you should have everything you need.
// Callback for Bonjour resolve events.
void DNSSD_API ServiceDiscovery::ResolveCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* fullname,
                                                 const char* hosttarget,
                                                 uint16_t port,      /* In network byte order */
                                                 uint16_t txtLen,
                                                 const unsigned char* txtRecord,
                                                 void* inContext)
{
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    assert(sd);
    // Save off the connection info, get TXT records, etc.
    ...
    // Deallocate the DNSServiceRef.
    DNSServiceRefDeallocate(inServiceRef);
}
hosttarget and port contain your connection info, and any text records can be obtained using the DNS-SD API (e.g. TXTRecordGetCount and TXTRecordGetItemAtIndex).
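For instance, a rough sketch of walking the TXT record inside ResolveCallBack using those two calls (buffer size and error handling kept minimal):
// Sketch: iterate all key/value pairs in the TXT record passed to ResolveCallBack.
uint16_t count = TXTRecordGetCount(txtLen, txtRecord);
for (uint16_t i = 0; i < count; ++i)
{
    char key[256];                 // keys are short; 256 is plenty
    uint8_t valueLen = 0;
    const void* value = nullptr;
    if (TXTRecordGetItemAtIndex(txtLen, txtRecord, i,
                                sizeof(key), key,
                                &valueLen, &value) == kDNSServiceErr_NoError)
    {
        // 'key' is NUL-terminated; 'value' points into txtRecord
        // (valueLen bytes, not NUL-terminated).
    }
}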
With shared connections, you have to deallocate each reference that was based on (copied from) the parent reference when you are done with it. I think the DNS-SD API does some reference counting (and parent/child tracking) when you pass copies of a shared reference to one of its functions. Again, see the documentation for details.
I tried not using shared connections at first, just passing ServiceRef down, which caused it to be overwritten in the callbacks and confused my main loop. I imagine that if you don't use shared connections, you need to maintain a list of references that need further processing (and process each one), then destroy them when you're done. The shared-connection approach seemed much easier.

OOP: proper class design for database connection in derived child class?

I'm coding a long-running, multi-threaded server in C++. It receives requests on a socket, does database lookups and returns responses on a socket.
The server reads various run information from a configuration file, including database connectivity parameters. I have to use a database abstraction class from the company's code library. I don't want to wait until the first DB search to lazily instantiate the DB connection (due to complexity not shown here, and the need to exit with an error at startup if the DB connection cannot be made).
My problem is how to get the database connection information down into the search class without doing any number of "ugly" or bad OOP things that would technically work. I want to learn how to do this right way.
Is there a good design pattern for doing this? Should I be using the "Parameterize from Above" pattern? Am I missing some simpler Composition pattern?
// Read config file.
// Open DB connection using config values.
Server::process_request(string request, string response) {
    try {
        Process process(request);
        if (process.do_parse(response)) {
            return REQ_OK;
        } else {
            // handle error
        }
    } catch (...) {
        // handle exceptions
    }
}

class Process : public GenericRequest {
public:
    Process(const string &input) : GenericRequest(input) {}
    bool do_parse(string &output);
};

bool Process::do_parse(string &output) {
    // Parse the input request.
    Search search; // database search object
    search.init( search parameters from parsing above );
    output = format_response(search.get_results());
}

class Search {
    // must use the Database library connection handle.
};
How do I get the DB connection from the Server class at top into the Search class instance at the bottom of the pseudo-code above?
It seems that the problem you are trying to solve is one of object dependencies, and it is well solved using dependency injection.
Your class Process requires an instance of Search, which must be configured somehow. Instead of having instances of Process allocate their own Search instance, it would be easier to have them receive a ready-made one at construction time. The Process class then won't have to know about the Search configuration details, and an unnecessary dependency is avoided.
But then the problem cascades up to whichever object must create a Process, because now that object has to know the configuration detail! In your situation it is not really a problem, since the Server class is the one creating Process instances, and it happens to know the configuration details for Search.
However, a better solution is to implement a specialized class - for instance DBService - which will encapsulate the DB details acquired from the configuration step and provide a method that returns ready-made Search instances. With this setup, no other object will depend on the Search class for its construction and configuration. As an added benefit, you can easily implement and inject a DBService mockup object, which will help you build test cases.
class DBSearch : public Search {
    /* implements/extends the Search interface/class wrt the DB */
};

class DBService {
public:
    /* constructor reads configuration details somehow: command line, file */
    Search *newSearch() {
        return new DBSearch(config); // search object specialized for the DB
    }
};
The code above somewhat illustrates the solution. Note that the newSearch method is not constrained to build only a plain Search instance; it may build any object specializing that class (for example the DBSearch class above). The dependency is thereby almost entirely removed from Process, which now only needs to know about the Search interface it actually manipulates.
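For illustration, a rough sketch of the wiring described above (names like db_service_ are hypothetical, and ownership of the returned Search is glossed over in this sketch):
// Process receives a ready-made Search instead of constructing its own.
class Process : public GenericRequest {
public:
    Process(const string &input, Search *search)
        : GenericRequest(input), search_(search) {}
    bool do_parse(string &output);   // uses search_ rather than a local Search
private:
    Search *search_;                 // injected; not owned by Process here
};

// Inside Server::process_request, the Server (which owns the DBService) injects:
//   Process process(request, db_service_.newSearch());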
The central element of good OOP design highlighted here is reducing coupling between objects, so as to reduce the amount of work needed when modifying or enhancing parts of the application.
Please look up dependency injection on SO for more information on that OOP design pattern.

understanding RProperty IPC communication

I'm studying this source base. Basically it is an Anim server client for Symbian 3rd edition, for the purpose of grabbing input events without consuming them in a reliable way.
In this line of the server it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
Inside this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach is only required to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After Attaching, the client will somewhere Subscribe to the property, passing a TRequestStatus reference. The server will signal the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented in the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In that case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and quite few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // Avoid interference with other properties by defining the category
    // as the secure ID of your process (perhaps it's the only allowed value).
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel)
        User::LeaveIfError(iStatus.Int()); // forward the error to RunError

    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from the docs)
    iProperty.Subscribe(iStatus);

    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound)
        User::LeaveIfError(err);

    SetActive();
}
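For completeness, the active object also needs a DoCancel (and ideally a RunError, which receives errors left from RunL); a minimal sketch, assuming the iProperty member shown above:
void CMyActive::DoCancel()
{
    // Cancel the outstanding subscription so the framework can complete the request.
    iProperty.Cancel();
}

TInt CMyActive::RunError(TInt aError)
{
    // Errors that leave from RunL arrive here; log or recover as appropriate.
    return KErrNone; // returning KErrNone tells the scheduler the error was handled
}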

Symbian C++ - synchronous Bluetooth discovery with timeout using RHostResolver

I am writing an application in Qt to be deployed on Symbian S60 platform. Unfortunately, it needs to have Bluetooth functionality - nothing really advanced, just simple RFCOMM client socket and device discovery. To be exact, the application is expected to work on two platforms - Windows PC and aforementioned S60.
Of course, since Qt lacks Bluetooth support, it has to be coded against the native APIs - Winsock2 on Windows and Symbian C++ on S60 - so I'm writing a simple abstraction layer. And I have some problems with the discovery part on Symbian.
The discovery call in the abstraction layer should work synchronously - it blocks until the end of the discovery and returns all the devices as a QList. I don't have the exact code right now, but I had something like this:
RHostResolver resolver;
TInquirySockAddr addr;
// OMITTED: resolver and addr initialization
TRequestStatus err;
TNameEntry entry;
resolver.GetByAddress(addr, entry, err);
while (true) {
    User::WaitForRequest(err);
    if (err == KErrHostResNoMoreResults) {
        break;
    } else if (err != KErrNone) {
        // OMITTED: error handling routine, not very important right now
    }
    // OMITTED: entry processing, adding to result QList
    resolver.Next(entry, err);
}
resolver.Close();
Yes, I know that User::WaitForRequest is evil, that coding Symbian-like, I should use active objects, and so on. But it's just not what I need. I need a simple, synchronous way of doing device discovery.
And the code above does work. There's one quirk, however - I'd like to have a timeout during the discovery. That is, I want the discovery to take no more than, say, 15 seconds - parametrized in a function call. I tried to do something like this:
RTimer timer;
TRequestStatus timerStatus;
timer.CreateLocal();
RHostResolver resolver;
TInquirySockAddr addr;
// OMITTED: resolver and addr initialization
TRequestStatus err;
TNameEntry entry;
timer.After(timerStatus, timeout * 1000000);
resolver.GetByAddress(addr, entry, err);
while (true) {
    User::WaitForRequest(err, timerStatus);
    if (timerStatus != KRequestPending) { // timeout
        resolver.Cancel();
        User::WaitForRequest(err);
        break;
    }
    if (err == KErrHostResNoMoreResults) {
        timer.Cancel();
        User::WaitForRequest(timerStatus);
        break;
    } else if (err != KErrNone) {
        // OMITTED: error handling routine, not very important right now
    }
    // OMITTED: entry processing, adding to result QList
    resolver.Next(entry, err);
}
timer.Close();
resolver.Close();
And this code kinda works. Even more, the way it works is functionally correct - the timeout works, the devices discovered so far are returned, and if the discovery ends earlier, it exits without waiting for the timer. The problem is that it leaves a stray thread in the program. That means that when I exit my app, its process stays loaded in the background, doing nothing. And I'm not the type of programmer who would be satisfied with a "fix" like making the "exit" button kill the process instead of exiting gracefully. Leaving a stray thread behind seems like too serious a resource leak.
Is there any way to solve this? I don't mind rewriting everything from scratch, even using totally different APIs (as long as we're talking about native Symbian APIs), I just want it to work. I've read a bit about active objects, but it doesn't seem like what I need, since I just need this to work synchronously... In the case of bigger changes, I would appreciate more detailed explanations, since I'm new to Symbian C++, and I don't really need to master it - this little Bluetooth module is probably everything I'll need to write in it in foreseeable future.
Thanks in advance for any help! :)
The code you have looks OK to me. You've avoided the usual pitfall of not consuming all the requests that you've issued. Assuming that you also cancel the timer and do a User::WaitForRequest(timerStatus) inside your error handling condition, it should work.
I'm guessing that what you're worrying about is that there's no way for your main thread to request that this thread exit. You can do this roughly as follows (sketched after the list):
Pass a pointer to a TRequestStatus into the thread when it is created by your main thread. Call this exitStatus.
When you do the User::WaitForRequest, also wait on exitStatus.
The main thread will do a bluetoothThread.RequestComplete(exitStatus, KErrCancel) when it wants the subthread to exit, where bluetoothThread is the RThread object that the main thread created.
In the subthread, when exitStatus is signalled, exit the loop to terminate the thread. You need to make sure you cancel and consume the timer and Bluetooth requests.
The main thread should do a bluetoothThread.Logon and wait for the resulting signal to know when the Bluetooth thread has exited.
There will likely be some more subtleties to deal correctly with all the error cases and so on.
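A minimal sketch of the main-thread side under those assumptions (thread creation and error handling omitted; the subthread-side changes are described in the trailing comments):
// Main thread: create the discovery thread, passing &exitStatus as its parameter.
TRequestStatus exitStatus = KRequestPending;
RThread bluetoothThread;
// ... bluetoothThread.Create(...) with &exitStatus in the thread argument, then Resume() ...

TRequestStatus logonStatus;
bluetoothThread.Logon(logonStatus);          // completed when the subthread terminates

// Later, to ask the discovery thread to stop and wait for it to finish:
TRequestStatus* p = &exitStatus;
bluetoothThread.RequestComplete(p, KErrCancel);
User::WaitForRequest(logonStatus);           // graceful exit, no stray thread left behind
bluetoothThread.Close();

// In the subthread the wait becomes a three-way wait over err, timerStatus and
// exitStatus (e.g. via User::WaitForNRequest); when exitStatus is no longer
// KRequestPending, cancel and consume the resolver/timer requests and return.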
I hope I'm not barking up the wrong tree altogether here...
The question is already answered, but... If you were to use active objects, I'd propose using a nested active scheduler (class CActiveSchedulerWait). You could then pass it to your active objects (a CPeriodic for the timer and some other CActive for the Bluetooth discovery), and one of them would stop this nested scheduler in its RunL() method. Better still, with this approach your call becomes synchronous for the caller, and your thread will be closed gracefully after performing the call.
If you're interested in this solution, search for examples of CActiveSchedulerWait, or just ask me and I'll give you a code sample.
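To give a flavour of the idea, a very rough sketch (class and member names are hypothetical; the two active objects are assumed to hold a pointer to iWait and call AsyncStop() in their RunL when discovery finishes or the timeout fires):
class CBluetoothDiscoverer : public CBase
{
public:
    void DiscoverL(TInt aTimeoutSeconds)
    {
        // Start the asynchronous resolver AO and the timeout AO here...
        // ...then block in a nested scheduler loop until one of them stops it.
        iWait.Start();   // returns only after some RunL calls iWait.AsyncStop()
        // At this point the results are ready and the call looks synchronous
        // to the (Qt) caller, yet active objects kept running underneath.
    }

private:
    CActiveSchedulerWait iWait;
};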

Simple C/C++ network I/O library

I have the following problem to solve. I want to make a number of requests to a number of "remote" servers (actually, a server farm we control). The connection is very simple. Send a line, and then read lines back. Because of the number of requests and the number of servers, I use pthreads, one for each request.
The naive approach, using blocking sockets, does not work; very occasionally I'll have a thread stuck in connect(). I cannot use SIGALRM because I am using pthreads. I tried converting the code to O_NONBLOCK, but this vastly complicated the code for reading single lines.
What are my options? I'm looking for the simplest solution that allows the following pseudocode:
// Inside a pthread
try {
    req = connect(host, port);
    req.writeln("request command");
    while (line = req.readline()) {
        // Process line
    }
} catch TimeoutError {
    // Bitch and complain
}
My code is in C++ and I'm using Boost. A quick look at Boost ASIO shows me that it probably isn't the correct approach, but I could be wrong. ACE is far, far too heavy-weight to solve this problem.
Have you looked at libevent?
http://www.monkey.org/~provos/libevent/
It's a totally different paradigm, but the performance is amazing.
memcached is built on top of libevent.
I saw the comments, and I think you can use boost::asio with a boost::asio::deadline_timer.
A fragment of code:
void restart_timer()
{
    timer_.cancel();
    timer_.expires_from_now(boost::posix_time::seconds(5));
    timer_.async_wait(boost::bind(&MyClass::handleTimeout,
                                  shared_from_this(),
                                  boost::asio::placeholders::error));
}
where handleTimeout is a callback member function, timer_ is a boost::asio::deadline_timer, and MyClass is similar to:
class Y : public boost::enable_shared_from_this<Y>
{
public:
    boost::shared_ptr<Y> f()
    {
        return shared_from_this();
    }
};
You can call restart_timer before connect or read/write.
More information about shared_from_this().
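As a usage note, the timeout handler might look roughly like this (socket_ is a hypothetical boost::asio::ip::tcp::socket member; closing it makes the pending connect/read complete with an error in its own handler):
void MyClass::handleTimeout(const boost::system::error_code& ec)
{
    // operation_aborted means the timer was cancelled by restart_timer(),
    // i.e. the I/O finished in time; anything else is a real timeout.
    if (ec != boost::asio::error::operation_aborted)
    {
        boost::system::error_code ignored;
        socket_.close(ignored);   // pending async operations now fail promptly
    }
}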
You mentioned this happens 'very occasionally'. Your connect side should have the fault tolerance and error handling you are looking for, but you should also consider the stability of your servers, DNS, network connections, etc.
The underlying protocols are very sturdy and work very well, so if you are experiencing these kinds of problems that often, it might be worth checking those as well.
You may also be able to close the socket from the other thread. That should cause the connect to fail.