I register an RPC procedure using the Thruway client and call it using the Autobahn client.
The issue I'm facing is that when the callee returns an empty array from the procedure (return array();), the caller receives null.
When the callee returns a single-element array (return array(['foo' => 'bar', 'baz' => 'quux']);), the caller receives that object on its own, not wrapped in an array.
And only returning multiple objects in an array works as expected: the caller receives an array of objects.
This is inconvenient and unintuitive: I have to check whether the response is defined at all, and then whether it is an array or a single object. I want the caller to receive exactly what I send from the callee: an empty array, a one-element array, or a multi-element array. If I send an array, the client should get an array.
The question is: how do I fix this behavior? I'm not even sure which of the two clients is misconfigured. Or maybe this is configurable on the router (I use Crossbar as the router). Or maybe this is the expected implementation of the protocol (which would be just awful).
Just tested this with two Autobahn clients (Autobahn|JS, but the behavior in this respect is identical across the Autobahn family) and Crossbar.io. The caller receives an empty array, an array with one element, and an array with multiple elements, respectively.
This is spec-conformant behavior: the caller receives what the callee sends. The only modification the router performs on the payload is to change the serialization if needed.
As a simplified case: I need to transfer a VARIANT to another process over an existing COM interface. I currently use the MIDL-generated marshaller.
The actual transfer involves many values, is part of a time-critical process, and may include large strings or SAFEARRAYs (a few MB), so the number of copies made seems relevant.
Since the receiver needs to "keep" the data beyond the function call, at least one copy needs to be made by the marshaller. All signatures I can think of involve two copies, however:
SetValue([in] VARIANT)
GetValue([out] VARIANT *) // called by receiver
In both cases, as I understand it, the marshaller makes a cross-process copy that is then destroyed by the marshaller. Since I need to keep the data in the receiver, I need to make a second copy.
I considered "detaching" the data at the receiver:
SetValue([in, out] VARIANT *)
// receiver detaches value and sets to VT_EMPTY for return
But this would also destroy the source.
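For illustration, the detach would look roughly like this on the receiving side (a sketch; CReceiver and m_value are hypothetical names, and m_value is assumed to hold VT_EMPTY before the call):

#include <windows.h>  // VARIANT, VariantInit, STDMETHODIMP

class CReceiver /* : implements the interface above */
{
    VARIANT m_value;
public:
    STDMETHODIMP SetValue(VARIANT *pv)
    {
        m_value = *pv;    // take ownership of the marshalled copy (shallow transfer)
        VariantInit(pv);  // hand back VT_EMPTY so the stub frees nothing of ours
        return S_OK;
    }
};

Because the parameter is [in, out], though, the proxy marshals the VT_EMPTY back, which is exactly how the caller's source VARIANT gets destroyed.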
Q1: Is it possible to get the MIDL-generated marshalling code to make only one copy?
Q2: Would this be possible with a custom marshaller, and at what cost? (My first look into that was extremely discouraging.)
I am pretty much bound to using SAFEARRAY and/or other VARIANT/PROPVARIANT types, and to transferring the whole array.
[edit]
Both sides use C++, the interfaces are IUnknown-based, and it needs to work cross-process on a single machine, in the same context.
You don't say so explicitly, but it seems the problem you are trying to solve is speed. In any case, consider using a profiler to identify the bottleneck if you haven't already done so.
I very much doubt that in this case it is the copying which is taking the time. More likely it is the context switching between the processes involved, since you are transferring the values one at a time: for each value you retrieve, execution has to switch to the target process and then back again.
You could speed this up enormously by making your design less "chatty" when setting or getting multiple values.
Something like this:
SetMultipleValues(
    [in] SAFEARRAY(BSTR)* asNames,
    [in] SAFEARRAY(VARIANT)* avValues
);

GetMultipleValues(
    [in] SAFEARRAY(BSTR)* asNames,
    [out,retval] SAFEARRAY(VARIANT)* pavValues
);
I.e. when calling GetMultipleValues, pass in an array of 10 names and receive an array of 10 values in the same order as the names passed in (or VT_EMPTY where a value does not exist).
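On the caller's side that might look like the sketch below, assuming a hypothetical IValueStore interface with the GetMultipleValues signature above (the interface and value names are illustrative):

#include <windows.h>
#include <oleauto.h>

HRESULT FetchValues(IValueStore *pStore)
{
    const wchar_t *names[] = { L"Width", L"Height", L"Title" };
    const LONG count = ARRAYSIZE(names);

    // Build the SAFEARRAY of names to request in a single round trip.
    SAFEARRAY *psaNames = SafeArrayCreateVector(VT_BSTR, 0, count);
    if (!psaNames) return E_OUTOFMEMORY;
    for (LONG i = 0; i < count; ++i)
    {
        BSTR bstr = SysAllocString(names[i]);
        SafeArrayPutElement(psaNames, &i, bstr);  // copies the BSTR
        SysFreeString(bstr);
    }

    // One cross-process call instead of `count` calls.
    SAFEARRAY *psaValues = NULL;
    HRESULT hr = pStore->GetMultipleValues(&psaNames, &psaValues);
    if (SUCCEEDED(hr))
    {
        for (LONG i = 0; i < count; ++i)
        {
            VARIANT v;
            VariantInit(&v);
            SafeArrayGetElement(psaValues, &i, &v);  // copies the element
            // ... use v; VT_EMPTY means the value did not exist ...
            VariantClear(&v);
        }
        SafeArrayDestroy(psaValues);
    }
    SafeArrayDestroy(psaNames);
    return hr;
}

The point is that the marshalling cost (and the context switch) is paid once per batch instead of once per value.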
If I keep sending data to a receiver, is it possible for separate sends to accumulate in the buffer, so that the next read of the buffer also returns data from another send?
I'm using Qt and readAll() to receive data and parse it. The data has some structure in it, so I can tell whether a message is complete and whether it is valid at all, but I'm worried that data from different sends will run together when I call readAll() and so invalidate this supposed-to-be valid data.
If it can happen, how do I prevent/control it? Or is that something the OS/API takes care of instead? I'm worried partly because of what the method is called. lol
TCP is a stream-based connection, not a packet-based connection, so you may not assume that what is sent in one go will also be received in one go. You still need some kind of protocol to packetize your stream.
For sending strings, you could use the nul character as a separator, or you could begin each message with a header that contains a magic number and a length.
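A sketch of length-prefixed framing on the sending side, assuming Qt 4.8 and a connected QTcpSocket (the function name is illustrative):

#include <QTcpSocket>
#include <QByteArray>
#include <QDataStream>
#include <QIODevice>

void sendMessage(QTcpSocket *socket, const QByteArray &payload)
{
    QByteArray frame;
    QDataStream out(&frame, QIODevice::WriteOnly);
    out.setVersion(QDataStream::Qt_4_8);
    out << quint32(payload.size());  // 4-byte big-endian length header
    frame.append(payload);
    socket->write(frame);            // one frame = header + payload
}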
According to http://qt-project.org/doc/qt-4.8/qiodevice.html#readAll, this function snarfs all the available data and returns it as an array. I don't see how the API raises concerns about overlapping data. The array is returned by value, and it represents the entire stream, so what would it even overlap with? Are you worried that the returned object actually has reference semantics, i.e. that it just holds pointers to storage that is reused in other calls to the same function?
If send and receive buffers overlap in any system, that's a bug, unless special care is taken that their use is completely serialized (i.e. a buffer is somehow used only for sending or only for receiving at any given time, without any mixup).
Why don't you use a fixed-length header followed by a variable-length packet, with the header holding the length of the packet?
That way you can avoid worrying about packet boundaries. For example, instead of just sending the string, send the length of the string followed by the string. At the receiving end, always read the length first and then read that many bytes of string; a receiving loop is sketched below.
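A matching receiver sketch, buffering whatever readAll() returns and consuming only complete frames (assumes the 4-byte length header from the sender sketch above; m_socket, m_buffer, and handleMessage are illustrative members):

#include <QTcpSocket>
#include <QByteArray>
#include <QDataStream>

void Receiver::onReadyRead()  // connected to QTcpSocket::readyRead
{
    m_buffer.append(m_socket->readAll());  // may hold 0..n frames plus a partial one
    while (m_buffer.size() >= 4)
    {
        QDataStream in(m_buffer);
        in.setVersion(QDataStream::Qt_4_8);
        quint32 len = 0;
        in >> len;
        if (quint32(m_buffer.size()) < 4 + len)
            break;                          // frame not complete yet; wait for more
        QByteArray payload = m_buffer.mid(4, int(len));
        m_buffer.remove(0, 4 + len);
        handleMessage(payload);             // hypothetical application handler
    }
}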
Project: a typical chat program. The server must receive text from multiple clients and fan each input out to all clients.
In the server I want each client to have a struct containing the socket fd and a std::queue. Each struct will be on a std::list.
As input is received from a client socket, I want to iterate over the list of structs and put the new input into each client struct's queue. The string is new'ed because I don't want copies of the string multiplied over all the clients. But I also want to avoid the headache of having multiple pointers to the string spread around and deciding when it is finally time to delete it.
Is this an appropriate occasion for a shared pointer? If so, is the shared_ptr's use count incremented each time I push it into a queue and decremented when I pop it?
Thanks for any help.
This is a case where a pseudo-garbage collector system will work much better than reference counting.
You need only one list of strings, because you "fan every input out to all clients". Because you will add to one end and remove from the other, a deque is an appropriate data structure.
Now, each connection needs only to keep track of the index of the last string it sent. Periodically (every 1000th message received, or every 4MB received, or something like that), you find the minimum of this index across all clients, and delete strings up to that point. This periodic check is also an opportunity to detect clients which have fallen far behind (possible broken connection) and recover. Without this check, a single stuck client will cause your program to leak memory (even under the reference counting scheme).
This scheme uses several times less data than reference counting, and it also removes one of the major points of cache contention (reference counts must be written from multiple threads, so they ruin performance). Even if you aren't using threads, it will still be faster.
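A minimal single-threaded sketch of the scheme (names are illustrative):

#include <algorithm>
#include <cstddef>
#include <deque>
#include <string>
#include <vector>

struct Client {
    int fd;
    std::size_t next = 0;  // absolute index of the next string to send
};

std::deque<std::string> messages;  // one shared fan-out buffer
std::size_t base = 0;              // absolute index of messages.front()
std::vector<Client> clients;

void trim()  // run periodically, e.g. every 1000th message
{
    if (clients.empty()) return;
    std::size_t minNext = clients.front().next;
    for (const Client &c : clients)
        minNext = std::min(minNext, c.next);
    while (base < minNext) {  // drop strings every client has already sent
        messages.pop_front();
        ++base;
    }
}

A client's pending message is messages[c.next - base], and trim() is also the natural place to spot a client whose next lags far behind base.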
That is an appropriate use of shared_ptr. And yes, the use count will be incremented, because a new shared_ptr is created by each push.
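As a sketch, the reference-counted version might look like this (names are illustrative):

#include <list>
#include <memory>
#include <queue>
#include <string>

struct Client {
    int fd;
    std::queue<std::shared_ptr<const std::string>> outbox;
};

std::list<Client> clients;

void fanOut(std::string text)
{
    auto msg = std::make_shared<const std::string>(std::move(text));
    for (Client &c : clients)
        c.outbox.push(msg);  // each push copies the shared_ptr: count + 1
    // msg goes out of scope here: count - 1
}

Popping from a queue destroys that copy of the shared_ptr (count - 1); the string itself is deleted when the last copy goes away.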
I am trying to use socket::receive in Boost.Asio to receive data over the network. I want it to block until a certain amount of data is available. It seems that for that I will have to set the message_flags argument of the receive function, but I cannot find information on what value to pass so that receive blocks until a certain amount of data is available. All I've seen is that it's an integer.
Can someone tell me what the valid values are?
For that it seems that I will have to set the message_flags argument of the receive function
This is incorrect. Use the read or async_read free functions instead; their overloads will read as many bytes as you request, based on the buffer size:
Remarks

This overload is equivalent to calling:

boost::asio::read(s, buffers, boost::asio::transfer_all());
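For example, here is a sketch that blocks until exactly n bytes have arrived, using the transfer_exactly completion condition (the function name is illustrative):

#include <boost/asio.hpp>
#include <cstddef>
#include <vector>

std::size_t readExactly(boost::asio::ip::tcp::socket &socket,
                        std::vector<char> &buffer, std::size_t n)
{
    buffer.resize(n);
    // Blocks until exactly n bytes have been read, or throws on error/EOF.
    return boost::asio::read(socket,
                             boost::asio::buffer(buffer),
                             boost::asio::transfer_exactly(n));
}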
I'm using C++ and wondering if I can just send an entire int array over a network (using basic sockets) without doing anything special, or do I have to split the data up and send it one element at a time?
Yes.
An array is laid out sequentially in memory, so you are free to do this. Simply pass the address of the first element and the size of the data in bytes, and you'll send all of it.
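A minimal sketch, assuming POSIX sockets and an already-connected descriptor (the function name is illustrative):

#include <sys/types.h>
#include <sys/socket.h>
#include <cstddef>

// Sends the whole contiguous array in one call. Note that send() may
// transmit fewer bytes than requested, so real code should loop until
// everything has gone out.
ssize_t sendIntArray(int sock, const int *data, std::size_t count)
{
    return send(sock, data, count * sizeof(int), 0);
}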
You could definitely send the array in one send, but you might want to do some additional work. There are issues with interpreting it correctly at the receiving end: for example, if the machines have different architectures, you may want to convert the integers to network byte order (e.g., with htonl).
Another thing to keep in mind is the memory layout. If it is a simple array of integers, it is contiguous in memory and a single send can capture all the data. If, though (and this is probably obvious), the array contains other data, then the layout definitely needs consideration. A simple example: if the array held pointers to other data, such as character strings, then sending the array would send the pointers (not the data), which would be meaningless to the receiver.
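A sketch of the byte-order conversion, so that machines of different endianness agree on the values (the function name is illustrative):

#include <arpa/inet.h>  // htonl / ntohl
#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint32_t> toNetworkOrder(const std::vector<uint32_t> &host)
{
    std::vector<uint32_t> net(host.size());
    for (std::size_t i = 0; i < host.size(); ++i)
        net[i] = htonl(host[i]);  // the receiver applies ntohl() per element
    return net;
}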