How to get the module connected to a gate? (C++)

Inside a module, I can get a cGate pointer by calling the method:
const cGate *cModule::gate(const char *gatename, int index = -1)
But once I have obtained the cGate pointer, I don't see a way to get the module that is connected (on the output side) to the gate. I don't see it in the cChannel class either. Is there a way?

Check the cGate::getPathStartGate() and cGate::getPathEndGate() methods. Depending on the direction of the connection, these will give you the endpoint gates (they follow the connections even across compound module boundaries until they find a simple module at the other end of the connection chain).
(cGate::getNextGate() and cGate::getPreviousGate() give only the next/previous gate in the chain.)
Once you have the cGate object from the other side, you can get the module using cGate::getOwnerModule().
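For example, a minimal sketch from inside a simple module, assuming an output gate named "out" (the gate name is made up for illustration):
cGate *outGate = gate("out");
// For an output gate, follow the connection path forward; getPathEndGate()
// stops at the gate of the simple module at the far end of the chain.
cGate *otherGate = outGate->getPathEndGate();
cModule *connectedModule = otherGate->getOwnerModule();
EV << "connected to: " << connectedModule->getFullPath() << endl;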

findModuleByPath("host") returns nullptr in OMNeT++

In OMNeT++ I'm working on the Aloha example. I am trying to add an acknowledgement message sent from the server to the node. I have defined cModule *host and added the line host = findModuleByPath("host"); to the initialize() method in Server.cc, but it returns nullptr. I have also seen that the getModuleByPath() method does the same job, but throws an exception instead of returning nullptr.
It cannot find the host module even though I have defined it. I believe I am missing something, but I don't know what. Is there a good example of a network (with multiple nodes) that also sends acknowledgement messages?
There are several issues with using cModule *host = findModuleByPath("host") in the server's initialize().
According to 4.11.4 Finding Modules by Path, that call looks for a submodule named host inside the current module, i.e. inside server. Of course, server does not contain host, so it returns nullptr. To find a sibling module called host, one should use:
cModule *host = findModuleByPath("^.host").
In Aloha there is no single host module but a vector of modules. That means the first host is named host[0], the second host[1], and so on. Therefore, it would be possible to use:
cModule *host = findModuleByPath("^.host[2]")
Another way is the following call:
cModule *host = getParentModule()->getSubmodule("host", 2);
Be aware that initialize() is called module by module across the network, and the order in which the next module is chosen is not guaranteed by the simulation environment; e.g. initialize() may have been called in host[1] but not yet in server.
Multi-Stage Initialization may be used to make sure that one stage of initialize() has been performed in all modules before the next stage begins; a sketch follows.
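A minimal sketch of multi-stage initialization, with illustrative stage numbers (the class name is taken from the question):
#include <omnetpp.h>
using namespace omnetpp;

class Server : public cSimpleModule
{
  protected:
    // Two stages: stage 0 for local setup, stage 1 for anything that
    // needs other modules to have completed their stage 0.
    virtual int numInitStages() const override { return 2; }
    virtual void initialize(int stage) override
    {
        if (stage == 1) {
            // Stage 0 has run in every module by now, so the host
            // submodule vector of the parent module already exists.
            cModule *host = getParentModule()->getSubmodule("host", 2);
            EV << "found " << host->getFullPath() << endl;
        }
    }
};

Define_Module(Server);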

(OMNeT++) Where do packets go?

I'm trying to do a project also described here: PacketQueue is 0
I've modified UdpBasicApp.cc to suit my needs, and in the handleMessage() function I added a piece of code to retrieve the length of a router's Ethernet queue. However, the value returned is always 0.
My .ini file settings regarding the queues of the routers are these:
**.router*.eth[*].mac.queue.typename = "DropTailQueue"
**.router*.eth[*].mac.queue.packetCapacity = 51
The code added in the UdpBasicApp.cc file is this:
cModule *mod = getModuleByPath("router3.eth[*].mac.queue.");
queueing::PacketQueue *queue = check_and_cast<queueing::PacketQueue*>(mod);
int c = queue->getNumPackets();
So my question is this: is this the right way to create a queue in a router linked to other nodes with an Ethernet link?
My doubt is that maybe packets don't pass through the specified interface, i.e. I've set the ini parameters for the wrong queue.
You are not creating that queue; it was already instantiated by the OMNeT++ kernel. The getModuleByPath() call merely gives you a reference to the already existing module.
The module path router3.eth[*].mac.queue. in that call is rather suspicious. It is hard-coded, so every instance of your application reads the queue length from router3, even if the app is installed in router1; i.e. you may be looking at the queue length in a completely different node. Then, eth[*] is wrong: patterns are not allowed in a module path, so an exact index (e.g. eth[0]) must be specified. Since a router obviously contains more than one Ethernet interface (otherwise it would not be a router), you have to decide which interface you are interested in and spell out that index. Finally, the trailing . at the end is also invalid, so I believe your code never executes at all; otherwise the check_and_cast part would have raised an error already.
If you want to reach the first Ethernet interface from a UDP app in the same node, use a relative path, something like this: ^.eth[0].mac.queue
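A minimal sketch of the corrected lookup, assuming the app and the interface sit in the same node and that eth[0] is the interface of interest:
// Relative path: ^ steps up to the parent (the node), then down to the queue.
cModule *mod = getModuleByPath("^.eth[0].mac.queue");
auto *queue = check_and_cast<queueing::PacketQueue *>(mod);
int c = queue->getNumPackets();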
Finally, if you are unsure whether your model works correctly, why not start it in Qtenv and check whether the given module receives any packets? That is, drill down in the model until the queue in question is opened as a simple module (i.e. you see the empty inside of the queue module), and then use "run/fast run until next event in this module". If the simulation does not stop, then that module indeed did not receive any packets and your configuration is wrong.

DNS-SD on Windows using MFC

I have an application built using MFC that I need to add Bonjour/Zeroconf service discovery to. I've had a bit of trouble figuring out how best to do it, but I've settled on using the DLL stub provided in the mDNSResponder source code and linking my application to the static lib generated from it (which in turn uses the system dnssd.dll).
However, I'm still having problems, as the callbacks don't always seem to be called, so my device discovery stalls. What confuses me is that it all works absolutely fine under OS X, using the OS X dns-sd terminal service, and under Windows using the dns-sd command line service. On that basis, I'm ruling out the client service as the problem and trying to figure out what's wrong with my Windows code.
I'm basically calling DNSServiceBrowse(), then in its callback calling DNSServiceResolve(), then finally calling DNSServiceGetAddrInfo() to get the IP address of the device so I can connect to it.
All of these calls are based on using WSAAsyncSelect, like this:
DNSServiceErrorType err = DNSServiceResolve(&client,
                                            kDNSServiceFlagsWakeOnResolve,
                                            interfaceIndex,
                                            serviceName,
                                            regtype,
                                            replyDomain,
                                            ResolveInstance,
                                            context);
if (err == 0)
{
    err = WSAAsyncSelect((SOCKET) DNSServiceRefSockFD(client), p->m_hWnd,
                         MESSAGE_HANDLE_MDNS_EVENT, FD_READ | FD_CLOSE);
}
But sometimes the callback just never gets called, even though the service is there and the command line confirms that.
I'm totally stumped as to why this isn't 100% reliable, when it is if I use the same DLL from the command line. My only possible explanation is that DNSServiceResolve tries to invoke the callback before WSAAsyncSelect has registered the handling message for the socket, but I can't see any way around this.
I've spent ages on this and am now completely out of ideas. Any suggestions would be welcome, even if they're "that's a really dumb way to do it, why aren't you doing X, Y, Z".
I call DNSServiceBrowse with a "shared connection" (see dns_sd.h for documentation), as in:
DNSServiceCreateConnection(&ServiceRef);
// Need to copy the main ref to another variable.
DNSServiceRef BrowseServiceRef = ServiceRef;
DNSServiceBrowse(&BrowseServiceRef,              // Receives reference to Bonjour browser object.
                 kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                 kDNSServiceInterfaceIndexAny,    // Browse on all network interfaces.
                 "_servicename._tcp",             // Browse for service types.
                 NULL,                            // Browse on the default domain (e.g. local.).
                 BrowserCallBack,                 // Callback function when Bonjour events occur.
                 this);                           // Callback context.
This is inside a main run method of a thread class called ServiceDiscovery. ServiceRef is a member of ServiceDiscovery.
Then immediately following the above code, I have a main event loop like the following:
while (true)
{
    err = DNSServiceProcessResult(ServiceRef);
    if (err != kDNSServiceErr_NoError)
    {
        DNSServiceRefDeallocate(BrowseServiceRef);
        DNSServiceRefDeallocate(ServiceRef);
        ServiceRef = nullptr;
        break; // stop looping once the refs have been deallocated
    }
}
Then, in BrowserCallback you have to set up the resolve request:
void DNSSD_API ServiceDiscovery::BrowserCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* inName,
                                                 const char* inType,
                                                 const char* inDomain,
                                                 void* inContext)
{
    (void) inServiceRef; // Unused
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    ...
    // Pass a copy of the main DNSServiceRef (just a pointer). We don't
    // hang on to the local copy since it's passed to the resolve callback,
    // where we deallocate it.
    DNSServiceRef resolveServiceRef = sd->ServiceRef;
    DNSServiceErrorType err =
        DNSServiceResolve(&resolveServiceRef,
                          kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                          inIFI,
                          inName,
                          inType,
                          inDomain,
                          ResolveCallBack,
                          sd);
}
Then in ResolveCallback you should have everything you need.
// Callback for Bonjour resolve events.
void DNSSD_API ServiceDiscovery::ResolveCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* fullname,
                                                 const char* hosttarget,
                                                 uint16_t port, /* In network byte order */
                                                 uint16_t txtLen,
                                                 const unsigned char* txtRecord,
                                                 void* inContext)
{
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    assert(sd);
    // Save off the connection info, get TXT records, etc.
    ...
    // Deallocate the DNSServiceRef.
    DNSServiceRefDeallocate(inServiceRef);
}
hosttarget and port contain your connection info (note that port is in network byte order), and any TXT records can be obtained using the DNS-SD API (e.g. TXTRecordGetCount and TXTRecordGetItemAtIndex).
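For instance, a minimal sketch of iterating the TXT records inside ResolveCallBack, using the helpers from dns_sd.h (the variable names come from the callback signature above):
uint16_t count = TXTRecordGetCount(txtLen, txtRecord);
for (uint16_t i = 0; i < count; ++i)
{
    char key[256];
    uint8_t valueLen = 0;
    const void* value = nullptr;
    if (TXTRecordGetItemAtIndex(txtLen, txtRecord, i,
                                sizeof(key), key,
                                &valueLen, &value) == kDNSServiceErr_NoError)
    {
        // key is NUL-terminated; value points into txtRecord and is not.
    }
}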
With shared connection references, you have to deallocate each reference based on (or copied from) the parent reference when you are done with it. I think the DNS-SD API does some reference counting (and parent/child tracking) when you pass copies of a shared reference to one of its functions. Again, see the documentation for details.
I tried not using shared connections at first: I was just passing down ServiceRef, causing it to be overwritten in the callbacks and my main loop to get confused. I imagine that if you don't use shared connections, you need to maintain a list of references that need further processing (and process each one), then destroy them when you're done. The shared-connection approach seemed much easier.

understanding RProperty IPC communication

I'm studying this source base. Basically it is an Anim server client for Symbian 3rd Edition, for the purpose of grabbing input events without consuming them, in a reliable way.
In this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
In this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach is only required to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will at some point Subscribe to the property, passing a TRequestStatus reference. The server will signal the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented in the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In that case, the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and quite few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // Avoid interference with other properties by defining the category
    // as the secure ID of your process (perhaps it's the only allowed value).
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel)
        User::LeaveIfError(iStatus.Int()); // forward the error to RunError
    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from docs)
    iProperty.Subscribe(iStatus);
    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound)
        User::LeaveIfError(err);
    SetActive();
}
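For completeness, every CActive subclass must also implement DoCancel(); a minimal sketch for this case, which cancels the outstanding subscription:
void CMyActive::DoCancel()
{
    // Completes the pending request with KErrCancel so that the
    // active scheduler can finish a Cancel() call on this object.
    iProperty.Cancel();
}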

simulate socket errors

How do you simulate socket errors? (Sometimes the server or client disconnects because of some socket error, and it is impossible to reproduce.)
I was looking for a tool to do this, but I can't find one.
Does anyone know of such a tool, or have a code example of how to do this? (C# or C/C++)
Add a wrapper layer to the APIs you're using to access the sockets and have them fail some percentage of the time, e.g. whenever rand() % 100 > x, as sketched below.
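A minimal sketch of that idea for a POSIX-style send(); the function name and the failure_percent parameter are made up for illustration:
#include <cstdlib>
#include <cerrno>
#include <sys/socket.h>

// Drop-in replacement for send() that fails failure_percent% of the time.
ssize_t faulty_send(int fd, const void* buf, size_t len, int flags,
                    int failure_percent)
{
    if (std::rand() % 100 < failure_percent)
    {
        errno = ECONNRESET; // pretend the peer reset the connection
        return -1;
    }
    return send(fd, buf, len, flags);
}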
I had exactly the same problem this summer.
I had a custom Socket class and wanted to test what would happen if read or write threw an exception. I really wanted to mimic the Java mocking frameworks, and I did it like this:
I derived a FakeSocket class from the Socket class and created something called a SocketExpectation. Then, in the unit tests, I created fake sockets, set up the expectations, and passed the fake socket to the code I wanted to test.
The FakeSocket had these methods (stripped of unneeded details):
uint32_t write(buffer, length); // calls check
uint32_t read(buffer, length); // calls check
bool matches();
void expect(expectation);
uint32_t check(CallType, buffer, length) const;
They're all pretty straightforward. check compares the arguments against the current expectation and, if everything is according to plan, proceeds to fulfil the SocketExpectation requirement.
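For illustration, a minimal sketch of what check() might look like, assuming a queue of pending expectations (the member and method names here are made up):
#include <cassert>
#include <stdexcept>

uint32_t FakeSocket::check(CallType type, void* buffer, uint32_t length) const
{
    // The next pending expectation must match this call's type and arguments.
    const SocketExpectation& e = expectations.front();
    assert(e.matches(type, buffer, length));
    if (e.throws())
        throw std::runtime_error(e.error_message()); // simulate a socket error
    return e.return_value(); // e.g. the number of bytes "read" or "written"
}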
The SocketExpectation has this outline (also stripped):
typedef enum { write, read } CallType;
SocketExpectation(CallType type);
SocketExpectation &with_arguments(void *a1, uint32_t a2); // expects these args
SocketExpectation &will_return(uint32_t value);
SocketExpectation &will_throw(const char * e); // test error handling
bool matches();
I added more methods as I needed them. I would create it like this, then pass the fake socket to the relevant method:
FakeSocket fake_socket;
fake_socket.expect(SocketExpectation(write).with_arguments(....).will_return(...));
fake_socket.expect(SocketExpectation(read).with_arguments(...).will_throw("something"));
My socket code unit tests are probably better described as integration tests, as I drive the code under test to connect to a mock remote peer. Since the remote peer is under the control of the test (it is just a simple client or server), I can have the test cause the remote peer to disrupt the connection in various ways and then ensure that the code under test reacts as expected. It takes a little work to set up, but once you have all the pieces in place it makes it pretty trivial to test most situations.
So, I guess, my suggestion is that rather than attempting to simulate the situations you're encountering, you should understand them and then reliably generate them; one such trick is sketched below.
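For example, a minimal sketch (POSIX sockets assumed) of one way the test-controlled peer can disrupt the connection: enabling SO_LINGER with a zero timeout makes close() send an RST, so the code under test sees a hard connection reset:
#include <sys/socket.h>
#include <unistd.h>

// Close fd abortively: the peer gets an RST instead of a clean FIN,
// which typically surfaces as ECONNRESET in the code under test.
void abortive_close(int fd)
{
    struct linger lg;
    lg.l_onoff = 1;   // enable linger
    lg.l_linger = 0;  // zero timeout => RST on close
    setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof(lg));
    close(fd);
}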