I have recently written an asynchronous HTTP client in C++ for iOS/OS X. I was confused as to why headers were not arriving in my CFHTTPMessageRef response object, until I realised that there is an object lurking as a property of the stream which contains the headers (available after handling the kCFStreamEventHasBytesAvailable event).
So now I copy the headers from that property into my response object when the stream signals that it has ended, before calling my event handler.
(code available on request, there's quite a lot of it)
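To illustrate, here is roughly what that copy step looks like in the stream's client callback. This is only a sketch; the callback name and the merge step are placeholders, not my actual code:

#include <CFNetwork/CFNetwork.h>

static void streamClientCallback(CFReadStreamRef stream, CFStreamEventType type, void *info)
{
    if (type == kCFStreamEventEndEncountered) {
        // The stream keeps its own CFHTTPMessageRef containing the parsed headers.
        auto streamResponse = (CFHTTPMessageRef)CFReadStreamCopyProperty(
            stream, kCFStreamPropertyHTTPResponseHeader);
        if (streamResponse) {
            CFDictionaryRef headers = CFHTTPMessageCopyAllHeaderFields(streamResponse);
            // ... merge `headers` into my own response object here ...
            CFRelease(headers);
            CFRelease(streamResponse);
        }
        // ... then notify my event handler ...
    }
}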
The Apple documentation is characteristically silent on the subject, and I was wondering if anyone out there knows the reason for this design decision by Apple. I am keen to know in case I have a fundamental misunderstanding somewhere.
EDIT: the documentation says this:
After you schedule the request on a run loop, you will eventually get a header complete callback. At this point, you can call CFReadStreamCopyProperty to get the message response from the read stream.
However there seems to be no indication as to what the name or value of this event mask is.
EDIT:
Having done some experimenting, I see that the stream creates its own response object some time after sending the kCFStreamEventOpenCompleted notification and before the first kCFStreamEventHasBytesAvailable notification.
In the kCFStreamEventHasBytesAvailable event handler I can do this:
// Grab the stream's own response object; the stream has already parsed the headers into it.
auto response = (CFHTTPMessageRef)CFReadStreamCopyProperty(readStream,
                                                           kCFStreamPropertyHTTPResponseHeader);

// Read whatever body bytes are available and append them to that same message.
constexpr CFIndex bufferSize = 1024;
UInt8 buffer[bufferSize];
auto bytesRead = CFReadStreamRead(readStream, buffer, bufferSize);
if (bytesRead > 0) {
    CFHTTPMessageAppendBytes(response, buffer, bytesRead);
}
CFRelease(response);
and the stream's response message object is indeed updated with new body data.
Now I am curious as to why this is not just done automatically by the stream.
So, I've advertised the DCCA application in my extension and registered for it via fd_disp_register, and I can parse the request, prepare the response message and, at the end, send it from my callback function with no issue.
This always works if the answer message is prepared inside the callback function. But what if I want to reply to the request message outside of my callback function?
So I tried it with some sample code: I changed the callback logic so that it no longer sends the message, and instead another thread fetches some information and sends out the response.
This failed completely, because as soon as the callback returns (with 0), the next action takes place (according to the disp_action value), which is not what I want.
So I'd like to ask: what is your solution for handling such a case, i.e. sending the response messages outside of the callback function?
Thanks.
I'm not sure I've ever done this before, but looking at libfdproto.h...
enum disp_action {
    DISP_ACT_CONT,  /* The next handler should be called, unless *msg == NULL. */
    DISP_ACT_SEND,  /* The updated message must be sent. No further callback is called. */
    DISP_ACT_ERROR  /* An error must be created and sent as a reply -- not valid for callbacks, only for fd_msg_dispatch. */
};
...it sounds like you want to set *act = DISP_ACT_CONT; and *msg = NULL; (because you've taken ownership of the message).
Does that work?
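Something along these lines, perhaps (untested sketch, assuming the freeDiameter 1.2.x callback signature; my_queue_push and the worker thread are placeholders for however you hand the request over):

#include <freeDiameter/extension.h>

/* Placeholder: however you hand the stashed request over to your worker thread. */
extern void my_queue_push(struct msg * request);

/* Dispatch callback: take ownership of the request instead of answering here. */
static int dcca_cb(struct msg ** msg, struct avp * avp, struct session * sess,
                   void * opaque, enum disp_action * act)
{
    my_queue_push(*msg);   /* stash the request for the worker thread */
    *msg = NULL;           /* we keep the message, freeDiameter must not free or route it */
    *act = DISP_ACT_CONT;  /* nothing for freeDiameter to send or forward */
    return 0;
}

/* The worker thread later builds the answer from the stashed request
   (e.g. fd_msg_new_answer_from_req) and sends it with fd_msg_send. */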
Good day, everyone.
I have a question about how bad it is to put a CompletableFuture into a message sent from one actor to another and then use get() to wait for its completion. I have a code example that I think is too complex to use in practice, but I can't find any suitable arguments for advising a refactor.
Code that sends the message with the future:
private void onSomeSignal(SomeMsg smsg) throws Exception {
    MessageToActor msg = new MessageToActor();
    CompletableFuture<Object> future = new CompletableFuture<>();
    msg.setFuture(future);
    actorRef.tell(msg, null);
    Object response = future.get(2, TimeUnit.SECONDS); // blocks until the other actor completes the future
    /* do something with response */
}
Code that completes the future (in another actor):
private void onSomeSignal(MessageToActor msg) {
    Object response = responseService.getResponse();
    msg.getFuture().complete(response);
}
Is something wrong here, apart from the fact that future.get() is a blocking operation?
Yes, doing that will come back and bite you: with this pattern you block one actor until some other actor responds, which means that if you use that elsewhere in your program there is a high risk of running into a deadlock (i.e. your whole program stops and cannot continue).
Instead of using a future to send back the response, use what actors are made for: sending messages. In “another actor” you should reply with getSender().tell(response, getSelf()), and in the first actor you should handle that response as a normal message instead of calling future.get().
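A rough sketch of that shape with the classic Java API, reusing MessageToActor and SomeMsg from your question (ResponseService is just a stand-in for your responseService, and the response is kept as Object as in your example):

import akka.actor.AbstractActor;
import akka.actor.ActorRef;

// The replying actor: answer the sender with a message instead of completing a future.
class Responder extends AbstractActor {
    private final ResponseService responseService; // stand-in for your service

    Responder(ResponseService responseService) { this.responseService = responseService; }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(MessageToActor.class, msg -> {
                Object response = responseService.getResponse();
                getSender().tell(response, getSelf()); // the reply goes back as a plain message
            })
            .build();
    }
}

// The asking actor: send the request with getSelf() as sender and handle the reply
// in createReceive(), instead of blocking on future.get().
class Requester extends AbstractActor {
    private final ActorRef responder;

    Requester(ActorRef responder) { this.responder = responder; }

    @Override
    public Receive createReceive() {
        return receiveBuilder()
            .match(SomeMsg.class, smsg -> responder.tell(new MessageToActor(), getSelf()))
            .matchAny(response -> { /* do something with the response here */ })
            .build();
    }
}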
I have the following source queue definition.
lazy val (processMessageSource, processMessageQueueFuture) =
  peekMatValue(
    Source
      .queue[(ProcessMessageInputData, Promise[ProcessMessageOutputData])](5, OverflowStrategy.dropNew))

def peekMatValue[T, M](src: Source[T, M]): (Source[T, M], Future[M]) = {
  val p = Promise[M]()
  val s = src.mapMaterializedValue { m =>
    p.trySuccess(m)
    m
  }
  (s, p.future)
}
The ProcessMessageInputData class is essentially an artifact that is created when a caller calls a web server endpoint which is hooked up to this stream (i.e. the service endpoint's business logic puts messages into this queue). The Promise[ProcessMessageOutputData] is completed downstream, in the sink of the application, and the web server has an onComplete callback on this future to return the response.
There are also other sources of ingress into this stream.
Now the buffer may back up, since the other source may overload the system, thereby triggering stream backpressure. The existing code just drops the new message, but I still want the ProcessMessageOutputData promise to be completed with an exception stating something like "Throttled".
Is there a mechanism to write a custom overflow strategy, or to post-process the overflowed element, that would allow me to do this?
According to https://github.com/akka/akka/blob/master/akka-stream/src/main/scala/akka/stream/impl/QueueSource.scala#L83, dropNew will work just fine. On the client's end it would look like this:
processMessageQueue.offer((in, pr)).foreach { res => // foreach needs an implicit ExecutionContext in scope
  res match { // Enqueued / Dropped come from akka.stream.QueueOfferResult
    case Enqueued => // Code to handle the case when the element was successfully enqueued.
    case Dropped  => // Code to handle messages that are dropped because the buffer was overflowing.
  }
}
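Since each queue element already carries its Promise, the Dropped case is exactly where you can fail it. A sketch of that (the exception type is just an example, and you may also want to cover the QueueClosed and Failure results):

import akka.stream.QueueOfferResult
import scala.concurrent.ExecutionContext.Implicits.global // or your own ExecutionContext

processMessageQueue.offer((in, pr)).foreach {
  case QueueOfferResult.Enqueued       => // accepted into the buffer
  case QueueOfferResult.Dropped        => pr.failure(new RuntimeException("Throttled"))
  case QueueOfferResult.QueueClosed    => pr.failure(new RuntimeException("Queue closed"))
  case QueueOfferResult.Failure(cause) => pr.failure(cause)
}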
As described in the D-Bus documentation, all IPC calls are considered asynchronous. When Qt calls a remote D-Bus object through QDBusAbstractInterface, there are QDBusPendingCall and QDBusPendingReply<T>, which are fully async and signal when the call has run to completion.
In my application design I want to implement an async call on my object adaptor, but the current Qt D-Bus implementation assumes that all method calls are blocking.
So the question is: is there a proper way to handle a D-Bus method call asynchronously?
This is explained pretty well in Declaring Slots in D-Bus Adaptors.
We do this by writing a slot that stores the request data in a persistent structure, indicating to the caller using QDBusMessage::setDelayedReply(true) that the response will be sent later.
struct RequestData
{
    QString request;
    QString processedData;
    QDBusMessage reply;
};

QString processRequest(const QString &request, const QDBusMessage &message)
{
    RequestData *data = new RequestData;
    data->request = request;

    message.setDelayedReply(true);
    data->reply = message.createReply();
    QDBusConnection::sessionBus().send(data->reply);

    appendRequest(data);
    return QString();
}
The use of QDBusConnection::sessionBus().send(data->reply) is needed to explicitly inform the caller that the response will be delayed. In this case, the return value is unimportant; we return an arbitrary value to satisfy the compiler.
When the request is processed and a reply is available, it should be sent using the QDBusMessage object that was obtained. In our example, the reply code could be something as follows:
void sendReply(RequestData *data)
{
    // data->processedData has been initialized with the request's reply
    QDBusMessage &reply = data->reply;

    // send the reply over D-Bus:
    reply << data->processedData;
    QDBusConnection::sessionBus().send(reply);

    // dispose of the transaction data
    delete data;
}
As can be seen in the example, when a delayed reply is in place, the return value(s) from the slot will be ignored by Qt D-Bus. They are used only to determine the slot's signature when communicating the adaptor's description to remote applications, or in case the code in the slot decides not to use a delayed reply.
The delayed reply itself is requested from Qt D-Bus by calling QDBusMessage::setDelayedReply(true) on the original message. It then becomes the responsibility of the called code to eventually send a reply to the caller.
Warning: When a caller places a method call and waits for a reply, it will only wait for a limited amount of time. Slots intending to take a long time to complete should make that fact clear in documentation so that callers properly set higher timeouts.
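On the caller's side, if you know the slot may take a while to reply, you can raise the per-call timeout before invoking it. A sketch with made-up service, path and interface names (only setTimeout() and call() matter here):

#include <QDBusInterface>
#include <QDBusReply>
#include <QDBusConnection>

void callWithLongerTimeout()
{
    // Hypothetical names; substitute your own service/object/interface.
    QDBusInterface iface("org.example.Service", "/org/example/Object",
                         "org.example.Interface", QDBusConnection::sessionBus());
    iface.setTimeout(60 * 1000); // 60 seconds instead of the default (about 25 s)
    QDBusReply<QString> reply = iface.call("processRequest", QStringLiteral("some request"));
    // use reply.isValid() / reply.value() as usual
}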
I'm studying this source base. Basically, it is an Anim server client for Symbian 3rd edition, for the purpose of reliably grabbing input events without consuming them.
If you look at this line of the server, it is basically setting the RProperty value (apparently to an increasing counter); it seems no actual processing of the input is done.
Inside this client line, the client is supposed to be receiving the notification data, but it only calls Attach.
My understanding is that Attach only needs to be called once, but it is not clear in the client what event is triggered every time the server sets the RProperty.
How (and where) is the client supposed to access the RProperty value?
After attaching, the client will somewhere Subscribe to the property, passing a TRequestStatus reference. The server will signal that request status via the kernel when the asynchronous event has happened (in your case, when the property was changed). If your example source code is implemented in the right way, you will find an active object (AO; a CActive-derived class) hanging around, and the iStatus of this AO will be passed to the RProperty API. In this case the RunL function of the AO will be called when the property has been changed.
It is essential in Symbian to understand the active object framework, and surprisingly few people actually do. Unfortunately I did not find a really good description online (it is explained quite well in the Symbian OS Internals book), but this page at least gives you a quick example.
Example
In the ConstructL of your CMyActive subclass of CActive:
CKeyEventsClient* iClient;
RProperty iProperty;
// ...

void CMyActive::ConstructL()
{
    RProcess myProcess;
    TSecureId propertyCategory = myProcess.SecureId();
    // avoid interference with other properties by defining the category
    // as a secure ID of your process (perhaps it's the only allowed value)
    TUint propertyKey = 1; // whatever you want
    iClient = CKeyEventsClient::NewL(propertyCategory, propertyKey, ...);
    iClient->OpenNotificationPropertyL(&iProperty);
    // ...
    CActiveScheduler::Add(this);
    iProperty.Subscribe(iStatus);
    SetActive();
}
Your RunL will be called when the property has been changed:
void CMyActive::RunL()
{
    if (iStatus.Int() != KErrCancel)
        User::LeaveIfError(iStatus.Int()); // forward the error to RunError

    // "To ensure that the subscriber does not miss updates, it should
    // re-issue a subscription request before retrieving the current value
    // and acting on it." (from docs)
    iProperty.Subscribe(iStatus);

    TInt value; // this type is passed to RProperty::Define() in the client
    TInt err = iProperty.Get(value);
    if (err != KErrNotFound)
        User::LeaveIfError(err);

    SetActive();
}