Understanding RProperty IPC communication - C++

I'm studying this source base. Basically, it is an Anim server client for Symbian 3rd Edition whose purpose is to grab input events, without consuming them, in a reliable way.
In this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
In this client line, the client is supposed to receive the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?

After attaching, the client will somewhere Subscribe to the property, passing a TRequestStatus reference. The server will signal the request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In this case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and quite few people actually do. Unfortunately I did not find a really good description online (they are explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // Avoid interference with other properties by defining the category
    // as the secure ID of your process (perhaps it's the only allowed value).
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus); // issue the first subscription
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    // Forward errors other than cancellation to RunError().
    if (iStatus.Int() != KErrCancel)
        User::LeaveIfError(iStatus.Int());

    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from the docs)
    iProperty.Subscribe(iStatus);

    TInt value; // the type passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound)
        User::LeaveIfError(err);

    SetActive();
}
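Since CMyActive derives from CActive, it also needs a DoCancel() that cancels the outstanding subscription; here is a minimal sketch based on the members above (plus the cleanup you would do in the destructor):
void CMyActive::DoCancel()
{
    // Cancel the outstanding Subscribe() request so that Cancel() can complete.
    iProperty.Cancel();
}

CMyActive::~CMyActive()
{
    Cancel();          // calls DoCancel() if a request is still outstanding
    iProperty.Close();
    delete iClient;
}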

Related

Using a lock in C++ across multiple tasks

I am not really seeking code examples, but I'm hoping someone can review my program design and provide feedback. I am trying to figure out how to ensure I have only one instance of my "workflow" running at a time.
I am working in C++.
This is my workflow:
1. I read rows off of a Postgres database.
2. If the table has any records, I want to do these instructions:
   - Read the records and transform them to JSON.
   - Send the JSON document to a remote web service.
   - Parse the response from the service. The service tells me which records were saved or not saved, based on their primary key.
3. I delete the successfully saved records.
4. I log the unsuccessful records (there's another process that consumes the logs, and so my work is done).
I want to perform all of this using a separate thread (or "task", whatever higher-level abstraction is available in C++), and I want to make sure that if my function for step [1] gets called multiple times, the additional calls basically get "dropped" if step [1] is already in flight.
In C++, I believe I can use a flag and a mutex. I use something like std::lock_guard<std::mutex> at the top of my method. Then the next line checks a flag.
// MyWorkflow.cpp
std::mutex myMutex;
int inFlight = 0;

void process() {
    std::lock_guard<std::mutex> guard(myMutex);
    if (inFlight) {
        return; // another run is already in progress, drop this call
    }
    inFlight = 1;
    std::vector<Widget> widgets = readFromMyTable();
    std::string json = getJson(&widgets);
    ... // Send the json to the remote service and handle the response
}
Okay, let me explain my confusion. I want to use Curl to perform the HTTP request. But Curl works asynchronously. And so if I make the asynchronous HTTP call via Curl, my update function will just return and myMutex will be released, right?
I think in my asynchronous response handler, I need to call a second function that's in MyWorkflow.cpp
void markCompletion() {
    std::lock_guard<std::mutex> guard(myMutex);
    inFlight = 0; // reset the in-flight flag here
}
Is this the right approach? I am worried that if an exception is thrown anywhere before I call markCompletion(), I will block all future callers. I think I need to ensure I have proper exception handling and always call markCompletion().
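Here is a rough sketch of the exception-safe variant I have in mind (std::atomic<bool> instead of the plain int, and a try/catch so the flag can never get stuck; Widget, readFromMyTable, getJson and startAsyncSend are just placeholders for my real code):
// MyWorkflow.cpp -- rough sketch, not the final design
#include <atomic>
#include <string>
#include <vector>

struct Widget { /* placeholder type */ };
std::vector<Widget> readFromMyTable();              // assumed to exist elsewhere
std::string getJson(std::vector<Widget>* widgets);  // assumed to exist elsewhere
void startAsyncSend(const std::string& json);       // hypothetical async Curl kick-off

std::atomic<bool> inFlight{false};

void markCompletion() {
    inFlight.store(false); // called from the Curl response handler when done
}

void process() {
    // Atomically claim the "in flight" slot; drop the call if one is already running.
    bool expected = false;
    if (!inFlight.compare_exchange_strong(expected, true)) {
        return;
    }
    try {
        std::vector<Widget> widgets = readFromMyTable();
        std::string json = getJson(&widgets);
        startAsyncSend(json); // flag stays set until markCompletion() runs
    } catch (...) {
        inFlight.store(false); // never leave the flag stuck if something throws
        throw;
    }
}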
I am terribly sorry for asking such a noob question, but I really want to learn to do this the right way.

Invalid address specified to RtlValidateHeap in cross-dll application when using QTcpSocket

Background:
Sorry this is such a complex problem, but it is driving me nuts. Finding a solution may help others who need a compartmentalized application.
I have a Qt program that is VERY compartmentalized because it is meant to host plugins and be used in a variety of situations, sometimes as a server, sometimes as a client, sometimes as both. The plugins that are loaded are login-dependent (because the access defined for the user is not necessarily up to the user, and the user's access to data and functionality may be limited).
The application relies on a core DLL library (specific to the application) which is used by the main exe, the client, the server, and all plugin dlls. Client and server functionality are also in separate dlls. I am new to this style of programming so that may be leading to my issue.
My Problem:
I have a class called "BidirectionalTcpConnection" that is defined in the core DLL and is to be used by the executable, the client dll, and the server dll. It is a class that keeps track of data passed back and forth over a QTcpSocket. I wrote the class to avoid the same problem I am having now, except that the problem originally occurred while using QTcpSocket::readAll() and it occurs again in the current situation. (If I read all but the last byte, and then read the last byte using QTcpSocket::peek(...), it worked fine.)
My new class successfully reads from and writes to the socket without error, but when I try to close or abort the socket (this happened with my earlier workaround too...), I get the same error I was getting when I tried to read the last byte: "Invalid address specified to RtlValidateHeap". Basically, it triggers a "User Breakpoint" in dbgheap.c.
My Hypothesis (What I believe is wrong):
The comments in dbgheap.c document that it checks whether the address is valid and resides on the current heap.
It is possible that the need to compartmentalize my application is leading to this issue. The data being supplied to the socket for sending was originally allocated on the executable's heap, along with the instance of BidirectionalTcpConnection. (I am trying to send the login and receive the permissions for application access.) The socket itself, however, is being allocated on the core DLL's heap (assuming the dll has a separate heap from the exe for internal data). I tried to avoid this by doing a deep copy, inside the core dll code, of each piece of data that is to be sent over the socket, but that hasn't solved the problem, presumably because the BidirectionalTcpConnection is still being allocated on a separate heap from the socket itself.
My question(s) for anyone who can help:
Is the assumption in my hypothesis correct?
Do I need to allocate the socket and the connection on the same heap? How do I overcome this issue?
Also... if you look at the code, will I need to delete the returned string (which is processed by the executable) within the core dll, in order to avoid the same issue?
If you guys need some code... I have supplied what I think is necessary. I can supply more upon request.
Some Code:
For starters... here is some basic code to show the way things are allocated. The login is performed in main() before the main interface is shown; w is the main interface window class instance. Here is the code that starts the process leading to the crash:
while (loginFailed)
{
    splash->showLogin();
    while (splash->isWaitingOnLogin())
        a.processEvents();

    QString username(*splash->getUserName());
    QString password(*splash->getPassword());
    // LATER: encrypt login for sending
    loginFailed = w.loginFailed(username, password, a);
}
Here is the code that instantiates the BidirectionalTcpConnection on the executable's heap and sends the login data. This code is inside a few separate private methods of the Qt main window class.
// method A
// processes QString parameters into sendable data...
// then calls method B,
// which creates the instance of BidirectionalTcpConnection
...
if (getServerAddress() == QString("LOCAL"))
    mTcpConnection = new BidirectionalTcpConnection(getHostAddressIn()->toString(),
                                                    (quint16)ServerPorts::loginRequest,
                                                    (long)15, this);
else
    mTcpConnection = new BidirectionalTcpConnection(*getServerAddress(),
                                                    (quint16)ServerPorts::loginRequest,
                                                    (long)15, this);
...
// back to method A...
mTcpConnection->sendBinaryData(*dataStream);
mTcpConnection->flushMessages(); // sends the data across the socket
...
// waits for the response and then parses the user data when it comes
while (waitForResponse)
{
    if (mTcpConnection->hasBufferedMessages())
    {
        QString* loginXML = mTcpConnection->getNextMessageAsText();
        // parse the xml
        if (parseLogin(*loginXML))
        {
            waitForResponse = false;
        }
        ...
    }
}
...
// calls the method that closes the socket, which causes the crash
mTcpConnection->abortConnection(); // crash occurs inside this method
delete mTcpConnection;
mTcpConnection = NULL;
Here is the relevant BidirectionalTcpConnection code, in order of use. Note that this code is located in the core dll, so presumably it is allocating data on a separate heap...
BidirectionalTcpConnection::BidirectionalTcpConnection(const QString& destination,
                                                       quint16 port, long timeOutInterval,
                                                       TimeUnit unit, QObject* parent) :
    QObject(parent),
    mSocket(parent),
    ...
{ }
void BidirectionalTcpConnection::sendBinaryData(QByteArray& data)
{
    // notice I try to avoid crossing heaps where I can by copying the data...
    mOutgoingMessageQueue.enqueue(new QByteArray(data)); // member is of QQueue type
}
QString* BidirectionalTcpConnection::getNextMessageAsText()
// NOTE: somehow I need to delete the returned pointer to prevent a memory leak
{
    if (mIncomingMessageQueue.size() == 0)
        return NULL;
    else
    {
        QByteArray* data = mIncomingMessageQueue.dequeue();
        QString* stringData = new QString(*data);
        delete data;
        return stringData;
    }
}
void BidirectionalTcpConnection::abortConnection()
{
    mSocket.abort(); // **THIS CAUSES THE ERROR/CRASH**
    clearQueues();
    mIsConnected = false;
}
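Regarding the last question above, one change I am considering is returning the text by value, so no ownership ever crosses the dll boundary; a rough, untested sketch (QScopedPointer is from Qt and just frees the dequeued QByteArray on scope exit):
// Sketch: return by value so the executable never has to delete memory that
// was allocated inside the core dll.
QString BidirectionalTcpConnection::getNextMessageAsText()
{
    if (mIncomingMessageQueue.isEmpty())
        return QString();

    QScopedPointer<QByteArray> data(mIncomingMessageQueue.dequeue());
    return QString(*data); // the copy is made before the QByteArray is freed
}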

DNS-SD on Windows using MFC

I have an application built using MFC that I need to add Bonjour/Zeroconf service discovery to. I've had a bit of trouble figuring out how best to do it, but I've settled on using the DLL stub provided in the mDNSresponder source code and linking my application to the static lib generated by that (which in turn uses the system dnssd.dll).
However, I'm still having problems, as the callbacks don't always seem to get called, so my device discovery stalls. What confuses me is that it all works absolutely fine under OSX, using the OSX dns-sd terminal service, and under Windows using the dns-sd command line service. On that basis, I'm ruling out the client service as the problem and trying to figure out what's wrong with my Windows code.
I'm basically calling DNSServiceBrowse(), then in that callback calling DNSServiceResolve(), then finally calling DNSServiceGetAddrInfo() to get the IP address of the device so I can connect to it.
All of these calls are based on using WSAAsyncSelect, like this:
DNSServiceErrorType err = DNSServiceResolve(&client,
                                            kDNSServiceFlagsWakeOnResolve,
                                            interfaceIndex,
                                            serviceName,
                                            regtype,
                                            replyDomain,
                                            ResolveInstance,
                                            context);
if (err == 0)
{
    err = WSAAsyncSelect((SOCKET) DNSServiceRefSockFD(client), p->m_hWnd,
                         MESSAGE_HANDLE_MDNS_EVENT, FD_READ | FD_CLOSE);
}
But sometimes the callback just never gets called even though the service is there and using the command line will confirm that.
I'm totally stumped as to why this isn't 100% reliable, but it is if I use the same DLL from the command line. My only possible explanation is that the DNSServiceResolve function tries to call the callback function before the WSAAsyncSelect has registered the handling message for the socket, but I can't see any way around this.
I've spent ages on this and am now completely out of ideas. Any suggestions would be welcome, even if they're "that's a really dumb way to do it, why aren't you doing X, Y, Z".
I call DNSServiceBrowse, with a "shared connection" (see dns_sd.h for documentation) as in:
DNSServiceCreateConnection(&ServiceRef);

// Need to copy the main ref to another variable.
DNSServiceRef BrowseServiceRef = ServiceRef;

DNSServiceBrowse(&BrowseServiceRef,               // Receives a reference to the Bonjour browser object.
                 kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                 kDNSServiceInterfaceIndexAny,    // Browse on all network interfaces.
                 "_servicename._tcp",             // Browse for this service type.
                 NULL,                            // Browse on the default domain (e.g. local.).
                 BrowserCallBack,                 // Callback function when Bonjour events occur.
                 this);                           // Callback context.
This is inside a main run method of a thread class called ServiceDiscovery. ServiceRef is a member of ServiceDiscovery.
Then immediately following the above code, I have a main event loop like the following:
while (true)
{
    err = DNSServiceProcessResult(ServiceRef);
    if (err != kDNSServiceErr_NoError)
    {
        DNSServiceRefDeallocate(BrowseServiceRef);
        DNSServiceRefDeallocate(ServiceRef);
        ServiceRef = nullptr;
        break; // stop processing once the connection has been torn down
    }
}
Then, in BrowserCallBack you have to set up the resolve request:
void DNSSD_API ServiceDiscovery::BrowserCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* inName,
                                                 const char* inType,
                                                 const char* inDomain,
                                                 void* inContext)
{
    (void) inServiceRef; // Unused
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    ...
    // Pass a copy of the main DNSServiceRef (just a pointer). We don't
    // hang on to the local copy since it's passed to the resolve callback,
    // where we deallocate it.
    DNSServiceRef resolveServiceRef = sd->ServiceRef;
    DNSServiceErrorType err =
        DNSServiceResolve(&resolveServiceRef,
                          kDNSServiceFlagsShareConnection, // Indicate it's a shared connection.
                          inIFI,
                          inName,
                          inType,
                          inDomain,
                          ResolveCallBack,
                          sd);
}
Then in ResolveCallback you should have everything you need.
// Callback for Bonjour resolve events.
void DNSSD_API ServiceDiscovery::ResolveCallBack(DNSServiceRef inServiceRef,
                                                 DNSServiceFlags inFlags,
                                                 uint32_t inIFI,
                                                 DNSServiceErrorType inError,
                                                 const char* fullname,
                                                 const char* hosttarget,
                                                 uint16_t port, /* in network byte order */
                                                 uint16_t txtLen,
                                                 const unsigned char* txtRecord,
                                                 void* inContext)
{
    ServiceDiscovery* sd = (ServiceDiscovery*)inContext;
    assert(sd);

    // Save off the connection info, get TXT records, etc.
    ...

    // Deallocate the DNSServiceRef.
    DNSServiceRefDeallocate(inServiceRef);
}
hosttarget and port contain your connection info, and any TXT records can be obtained using the DNS-SD API (e.g. TXTRecordGetCount and TXTRecordGetItemAtIndex).
With shared connection references, you have to deallocate each one based on (or copied from) the parent reference when you are done with them. I think the DNS-SD API does some reference counting (and tracks the parent/child relationship) when you pass copies of a shared reference to one of its functions. Again, see the documentation for details.
I tried not using shared connections at first, and I was just passing down ServiceRef, causing it to be overwritten in the callbacks and my main loop to get confused. I imagine if you don't use shared connections, you need to maintain a list of references that need further processing (and process each one), then destroy them when you're done. The shared connection approach seemed much easier.
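For what it's worth, here is a rough sketch of pulling items out of the TXT record inside ResolveCallBack, using the txtLen/txtRecord arguments passed to the callback (untested, but the two TXTRecord functions are the ones mentioned above; ntohs comes from Winsock, which you already use for WSAAsyncSelect):
// Inside ResolveCallBack: iterate over the TXT record key/value pairs.
uint16_t count = TXTRecordGetCount(txtLen, txtRecord);
for (uint16_t i = 0; i < count; ++i)
{
    char key[256];
    uint8_t valueLen = 0;
    const void* value = NULL;
    if (TXTRecordGetItemAtIndex(txtLen, txtRecord, i,
                                sizeof(key), key,
                                &valueLen, &value) == kDNSServiceErr_NoError)
    {
        // key is NUL-terminated; value points into txtRecord and is NOT
        // NUL-terminated, so copy valueLen bytes if you need a string.
    }
}
// The port argument is in network byte order, so convert it before use:
uint16_t hostPort = ntohs(port);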

How to display remote email message?

I have been using this code to display IMAP4 messages:
void DisplayMessageL( const TMsvId &aId )
{
    // 1. construct the client MTM
    TMsvEntry indexEntry;
    TMsvId serviceId;
    User::LeaveIfError( iMsvSession->GetEntry(aId, serviceId, indexEntry) );
    CBaseMtm* mtm = iClientReg->NewMtmL(indexEntry.iMtm);
    CleanupStack::PushL(mtm);

    // 2. construct the user interface MTM
    CBaseMtmUi* uiMtm = iUiReg->NewMtmUiL(*mtm);
    CleanupStack::PushL(uiMtm);

    // 3. display the message
    uiMtm->BaseMtm().SwitchCurrentEntryL(indexEntry.Id());
    CMsvOperationWait* waiter = CMsvOperationWait::NewLC();
    waiter->Start(); // we use a synchronous waiter
    CMsvOperation* op = uiMtm->OpenL(waiter->iStatus);
    CleanupStack::PushL(op);
    CActiveScheduler::Start();

    // 4. cleanup
    CleanupStack::PopAndDestroy(4); // op, waiter, uiMtm, mtm
}
However, in the case when the user attempts to download a remote message (i.e. one of the emails not previously retrieved from the mail server) and then cancels the request, my code remains blocked and never receives the information that the action was cancelled.
My questions are:
What is the workaround for the above, so the application is not stuck?
Can anyone provide a working example of an asynchronous call for opening remote messages that does not panic and crash the application?
Asynchronous calls for POP3, SMTP and local IMAP4 messages work perfectly, but remote IMAP4 messages create this issue.
I am testing these examples for S60 5th edition.
Thank you all in advance.
First of all, I would try removing CMsvOperationWait and dealing with the open request asynchronously, i.e. have an active object waiting for the CMsvOperation to complete.
CMsvOperationWait is nothing more than a convenience to make an asynchronous operation appear synchronous, and my suspicion is that this is the culprit: in the download-then-show-message case, there are two asynchronous operations chained.
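Roughly something like this (just a sketch with made-up names, not tested): an active object that owns the asynchronous open operation, so a user cancel completes the request with KErrCancel instead of blocking a nested scheduler loop.
class COpenMessageOp : public CActive
{
public:
    COpenMessageOp(CBaseMtmUi& aUiMtm)
        : CActive(EPriorityStandard), iUiMtm(aUiMtm), iOp(NULL)
    {
        CActiveScheduler::Add(this);
    }
    ~COpenMessageOp()
    {
        Cancel();
        delete iOp;
    }
    void StartL()
    {
        iOp = iUiMtm.OpenL(iStatus); // asynchronous open, no CMsvOperationWait
        SetActive();
    }
private:
    void RunL()
    {
        // iStatus.Int() is KErrCancel if the user cancelled the download;
        // handle other completion codes here instead of blocking the UI.
    }
    void DoCancel()
    {
        if (iOp)
            iOp->Cancel();
    }
    CBaseMtmUi& iUiMtm;
    CMsvOperation* iOp;
};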

OpenDDS and notification of publisher presence

Problem: How can I get liveliness notifications for both publisher connect and disconnect?
Background:
I'm working with an OpenDDS implementation where I have a publisher and a subscriber of a data type (dt), using the same topic, located on separate computers.
The reader on the subscriber side has overridden implementations of on_data_available(...) and on_liveliness_changed(...). My subscriber is started first, resulting in a callback to on_liveliness_changed(...) which says that there are no writers available. When the publisher is started I get a new callback telling me there is a writer available, and when the publisher publishes, on_data_available(...) is called. So far everything is working as expected.
The writer on the publisher side has an overridden implementation of on_publication_matched(...). When starting the publisher, on_publication_matched(...) gets called, since we already have a subscriber started.
The problem is that when the publisher disconnects, I get no callback to on_liveliness_changed(...) on the reader side, nor do I get a new callback when the publisher is started again.
I have tried to change the reader QoS by setting readerQos.liveliness.lease_duration.
But the result is that on_data_available(...) never gets called, and the only callback to on_liveliness_changed(...) is at startup, telling me that there are no publishers.
DDS::DataReaderQos readerQos;
DDS::StatusMask mask = DDS::DATA_AVAILABLE_STATUS | DDS::LIVELINESS_CHANGED_STATUS | DDS::LIVELINESS_LOST_STATUS;

m_subscriber->get_default_datareader_qos(readerQos);

DDS::Duration_t t = { 3, 0 };
readerQos.liveliness.lease_duration = t;

m_binary_Reader = static_cast<binary::binary_tdatareader*>(
    m_subscriber->create_datareader(m_Sender_Topic, readerQos, this, mask, 0, false));
/Kristofer
Ok, guess there aren't many DDS users here.
After some research I found that a reader/writer match occurs only if this compatibility criterion is satisfied: offered lease_duration <= requested lease_duration
The solution was to set the writer QoS to offer the same liveliness. There is probably a way of checking whether the requested reader QoS can be supplied by the corresponding writer and, if not, falling back to a "lower" QoS, although I haven't tried it yet.
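On the writer side that looks roughly like this (a sketch; m_publisher, m_Sender_Topic and writerListener are names from my own code, adjust to yours):
DDS::DataWriterQos writerQos;
m_publisher->get_default_datawriter_qos(writerQos);

DDS::Duration_t lease = { 3, 0 };            // must be <= the reader's requested 3 s
writerQos.liveliness.lease_duration = lease;
// writerQos.liveliness.kind stays at the default (DDS::AUTOMATIC_LIVELINESS_QOS),
// so the middleware asserts liveliness for the writer automatically.

DDS::DataWriter_var writer =
    m_publisher->create_datawriter(m_Sender_Topic, writerQos, writerListener,
                                   DDS::PUBLICATION_MATCHED_STATUS);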
In the on_liveliness_changed callback method I simply evaluated the alive_count from the LivelinessChangedStatus.
/Kristofer