I have a Java 8 Stream with an underlying I/O stream, and I want to make sure that my method closes that stream.
Is there any way to check this with a unit test?
I think providing a handler to Stream's onClose method is the easiest way to do this.
AtomicBoolean wasClosed = new AtomicBoolean(false);
Stream<String> stream = Stream.of("foo", "bar").onClose(() -> wasClosed.set(true));
// ...code under test that uses the stream
assertThat(wasClosed.get()).isTrue();
For what it's worth, I legitimately needed to test this, as we make Streams from JDBC ResultSets, and the actual Stream relies on onClose to close the ResultSet.
Streams have a BaseStream.close() method and implement AutoCloseable, but nearly all stream instances do not actually need to be closed after use. Generally, only streams whose source is an IO channel (such as those returned by Files.lines(Path, Charset)) will require closing. Most streams are backed by collections, arrays, or generating functions, which require no special resource management. (If a stream does require closing, it can be declared as a resource in a try-with-resources statement.)
https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html
I have a C++ Node.js extension that does network communication. Currently, it creates its own TCP connections in C. I would like to have it use sockets created in Node to take advantage of standard libraries like Cluster and to be able to use the existing event loop.
The solution I see is to simply create a net.Socket object and then extract the uv_tcp_t value from the underlying TCPWrap object. There are a couple of issues I see with this option:
The Socket documentation seems to indicate that a socket immediately starts reading when it connects. I would expect that to cause data loss if I want to read on the underlying UV socket in the extension instead of listening for the 'data' event in JavaScript.
While the TCPWrap class has a function to get the underlying uv_tcp_t struct, it does not seem to have an API to relinquish ownership of that struct. I expect this to cause problems later related to disposing of the struct and ownership of its data member (used for user data).
Is there any way to avoid these issues and use the socket in the extension?
I am working on an application that must use a proprietary protocol. The protocol is object based, and thanks to Stack Overflow I have worked my way through many of my struggles in developing a framework that is both intuitive and efficient. I have read many of the resources on network programming on this site and others and am trying to take most of that advice to heart. One thing that is pointed out regularly is to avoid dynamic memory allocation. I am struggling to work out a way of building my packets efficiently without dynamic allocation while remaining as object-oriented as possible.
A summary of my situation:
At the most abstract level I want to provide objects that mirror the objects on the server side that we are communicating with:
class AnObject
{
public:
    AnObject();
    ~AnObject();
    void SetVal(int Val);
    int ReadVal();
private:
    char* Seg_Path;
};
Other methods may be implemented, and inheritance would come into play to represent specific objects, but that's the idea. Now, these objects are grouped on the server side, and access to a specific object requires opening a connection to that group of objects:
class AnObject_Group
{
public:
    void Connect(char* ConnectData);
private:
    int ConnectionID;
    struct ConnectionSpecificData;   // connection-specific state, details omitted
};
Now there is one more level of abstraction, a sort of socket wrapper I created:
class My_Socket
{
public:
    void Connect(char* ipAddress);
private:
    int Session_Number;
    SOCKET Socket;   // SOCKET comes from <winsock2.h>
};
Please note I have left a lot of the classes' details out because my question is specifically about higher-level structuring. I have left the Send/Recv methods out because they are part of my question.
Now, if AnObject issues a read, it has to build a small packet and send it through AnObject_Group, which places some information in front and sends it on through My_Socket. There are several other issues that come into play. The server can be quite slow to respond, so I am attempting to do this asynchronously. I intend to have one thread that services all my My_Socket instances and handles their send/recv through Winsock, but I have to ensure any message/buffer works with this. The other issue I struggle with is that it is not always the same structure, i.e. it could go AnObject -> AnObject_Group -> AnObject_Group -> My_Socket, and furthermore the data put in front at each stage is not fixed; it is usually decided at the moment it is put in front, so AnObject doesn't know how much data AnObject_Group needs to add, nor how much My_Socket needs to add.
I am not even sure what order to build these packets in: should I send a message to My_Socket saying I want to send and let My_Socket assemble the packet, or do I build the packet from AnObject, which calls a method in AnObject_Group, which calls a method in My_Socket? Should my buffer live in My_Socket, or should there be a buffer in every AnObject (there may be many of these)?
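To illustrate what I mean by each layer adding data in front, here is a rough sketch (all of the names here are hypothetical, not part of my actual code) of a fixed-size buffer with reserved headroom that each layer could prepend its header into without any dynamic allocation:

#include <cstring>
#include <cstddef>

// Hypothetical fixed-size packet buffer: the payload is written after some
// reserved headroom, and each layer prepends its header into that headroom,
// so no allocation or copying happens as the packet moves down the layers.
class PacketBuffer
{
public:
    static const std::size_t HEADROOM = 64;    // guess at worst-case total header size
    static const std::size_t CAPACITY = 1024;

    PacketBuffer() : m_start(HEADROOM), m_end(HEADROOM) {}

    // Called by AnObject to write the request payload.
    void WritePayload(const char* data, std::size_t len)
    {
        std::memcpy(m_data + m_end, data, len);      // assumes it fits within CAPACITY
        m_end += len;
    }

    // Called by AnObject_Group / My_Socket to place their header in front.
    void Prepend(const char* header, std::size_t len)
    {
        m_start -= len;                              // assumes len <= remaining headroom
        std::memcpy(m_data + m_start, header, len);
    }

    const char* Data() const { return m_data + m_start; }
    std::size_t Size() const { return m_end - m_start; }

private:
    char        m_data[CAPACITY];
    std::size_t m_start;   // first used byte
    std::size_t m_end;     // one past the last used byte
};

Whether something like this should live in My_Socket or in every AnObject is exactly the part I am unsure about.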
Sorry if my question is vague; I'm a first-time asker, but I feel like I have read and searched so much and haven't been able to find clear help on this sort of matter. Many posts talk about how to send or recv or how to handle multithreading with sockets, but I feel it is tricky to get a scalable solution.
Thanks a lot.
I am new to OpenSSL and want to know about the difference between using the SSL_* and BIO_* functions for reading and writing data. It would also be great to have some examples showing the use cases for both.
Thanks
Ravi
SSL_* functions operate on an SSL connection. BIO stands for basic input/output; the BIO_* functions are used for reading and writing over different input/output devices such as a file, a memory buffer, or even a socket connection.
SSL_* functions perform the required encryption/decryption of the data, whereas BIO_* functions do not.
There are plenty of use cases for both.
For SSL_*: whenever you want to write an SSL client or server, you need these.
For reading and writing from a file or memory buffer, you may need the BIO_* functions. A common use is with the i2d_*/d2i_* functions, which write or read encoded structures to/from an input/output device. For example, if you want to write your public key to a memory buffer or a file, you can wrap either one in a BIO * structure; your writing code then makes no distinction between file and buffer and simply writes to the BIO *.
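A minimal sketch of both (error handling and library initialization omitted; the helper function names are mine):

#include <openssl/ssl.h>
#include <openssl/pem.h>
#include <openssl/bio.h>

// SSL_*: reads and writes on an established SSL connection are encrypted/decrypted.
void send_over_ssl(SSL *ssl, const char *msg, int len)
{
    SSL_write(ssl, msg, len);            // data goes out encrypted
}

// BIO_*: plain I/O abstraction; the same code writes a PEM key to memory or to a file.
void write_pubkey(EVP_PKEY *pkey, BIO *out)
{
    PEM_write_bio_PUBKEY(out, pkey);     // no encryption, just serialization
}

void examples(EVP_PKEY *pkey)
{
    BIO *mem  = BIO_new(BIO_s_mem());           // memory-buffer BIO
    BIO *file = BIO_new_file("pub.pem", "w");   // file BIO
    write_pubkey(pkey, mem);                    // identical call for both targets
    write_pubkey(pkey, file);
    BIO_free(file);
    BIO_free(mem);
}

Note that SSL_write encrypts the data for the peer, while the BIO_* calls just move bytes to whatever device the BIO wraps.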
I think I have a problem. I have two TCP apps connected to each other which use Winsock I/O completion ports to send/receive data (non-blocking sockets).
Everything works just fine until there's a data transfer burst. The sender starts sending incorrect/malformed data.
I allocate the buffers I'm sending on the stack, and if I understand correctly, that's the wrong thing to do, because these buffers should remain as I sent them until I get the "write complete" notification from IOCP.
Take this for example:
void some_function()
{
    char cBuff[1024];
    // filling cBuff with some data
    WSASend(...); // sending cBuff, non-blocking mode
    // filling cBuff with other data
    WSASend(...); // again, sending cBuff
    // ..... and so forth!
}
If I understand correctly, each of these WSASend() calls should have its own unique buffer, and that buffer can be reused only when the send completes.
Correct?
Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, how can I avoid a performance penalty, etc.?
And if I am to use such buffers, that means I should copy the data to be sent from the source buffer to the temporary one; in that case I'd set SO_SNDBUF on each socket to zero, so the system will not re-copy what I have already copied. Are you with me? Please let me know if I wasn't clear.
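To show what I mean by giving each WSASend its own buffer, here is roughly the pattern I have in mind (only a sketch to clarify the question; the names are mine and error handling is omitted): a heap-allocated context that owns both the OVERLAPPED and the data, freed only when that particular send's completion is dequeued.

#include <winsock2.h>
#include <cstring>

// One heap-allocated context per WSASend; it owns the data until the
// completion for this particular send is dequeued from the completion port.
struct SendContext
{
    OVERLAPPED overlapped;
    WSABUF     wsabuf;
    char       data[1024];
};

void QueueSend(SOCKET s, const char *src, size_t len)   // assumes len <= sizeof(data)
{
    SendContext *ctx = new SendContext();
    ZeroMemory(&ctx->overlapped, sizeof(ctx->overlapped));
    std::memcpy(ctx->data, src, len);
    ctx->wsabuf.buf = ctx->data;
    ctx->wsabuf.len = static_cast<ULONG>(len);
    WSASend(s, &ctx->wsabuf, 1, NULL, 0, &ctx->overlapped, NULL);
    // ctx is deleted later, by the completion handler that dequeues this
    // OVERLAPPED from the completion port - not here.
}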
Take a serious look at boost::asio. Asynchronous IO is its specialty (just as the name suggests). It's a pretty mature library by now, having been in Boost since 1.35. Many people use it in production for very intensive networking. There's a wealth of examples in the documentation.
One thing is for sure - it takes working with buffers very seriously.
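For example, the usual asio idiom is to keep the outgoing buffer alive in a shared_ptr captured by the completion handler, so it cannot be freed or reused while the write is still in flight (a minimal sketch):

#include <boost/asio.hpp>
#include <memory>
#include <string>

using boost::asio::ip::tcp;

// The shared_ptr captured by the completion handler keeps the buffer alive
// until the asynchronous write has actually finished.
void async_send(tcp::socket& socket, const std::string& payload)
{
    auto data = std::make_shared<std::string>(payload);
    boost::asio::async_write(socket, boost::asio::buffer(*data),
        [data](const boost::system::error_code& ec, std::size_t /*bytes_sent*/)
        {
            // 'data' is released here, only after the write completed (or failed).
        });
}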
Edit:
The basic idea for handling bursts of input is queuing.
Create, say, three linked lists of pre-allocated buffers - one for free buffers, one for to-be-processed (received) data, one for to-be-sent data.
Every time you need to send something - take a buffer off the free list (allocate a new one if the free list is empty), fill it with data, put it onto the to-be-sent list.
Every time you need to receive something - take a buffer off the free list as above, give it to the IO receive routine.
Periodically take buffers off the to-be-sent queue and hand them off to the send routine.
On send completion (inline or asynchronous) - put them back onto the free list.
On receive completion - put the buffer onto the to-be-processed list.
Have your "business" routine take buffers off the to-be-processed list.
The bursts will then fill that input queue until you are able to process them. You might want to limit the queue size to avoid blowing through all the memory.
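A bare-bones sketch of those three lists (locking omitted, sizes arbitrary, names made up):

#include <cstddef>
#include <list>
#include <vector>

struct Buffer
{
    std::vector<char> data;
    std::size_t       used;
};

// The three queues described above.
std::list<Buffer*> freeList;        // buffers ready for reuse
std::list<Buffer*> toBeSent;        // filled, waiting for the send routine
std::list<Buffer*> toBeProcessed;   // received, waiting for the business logic

Buffer* acquire()
{
    if (freeList.empty())
        return new Buffer{ std::vector<char>(4096), 0 };   // grow the pool on demand
    Buffer* b = freeList.front();
    freeList.pop_front();
    return b;
}

void queueForSend(Buffer* b) { toBeSent.push_back(b); }               // producer side
void release(Buffer* b)      { b->used = 0; freeList.push_back(b); }  // on completion

In a real program the lists would of course be protected by a lock and the pool size capped, as noted above.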
I don't think it is a good idea to do a second send before the first send is finished.
Similarly, I don't think it is a good idea to change the buffer before the send is finished.
I would be inclined to store the data in some sort of queue. One thread can keep adding data to the queue. The second thread can work in a loop: do a send and wait for it to finish; if there is more data, do another send, otherwise wait for more data.
You would need a critical section (or some such) to nicely share the queue between the threads and possibly an event or a semaphore for the sending thread to wait on if there is no data ready.
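A rough sketch of that arrangement, using a standard mutex and condition variable in place of the critical section and event (blockingSend is just a stand-in name for your actual send-and-wait call):

#include <condition_variable>
#include <deque>
#include <mutex>
#include <string>

std::mutex              queueLock;   // plays the role of the critical section
std::condition_variable dataReady;   // plays the role of the event/semaphore
std::deque<std::string> pending;

// Producer thread(s): add data to the queue and wake the sender.
void enqueue(std::string data)
{
    {
        std::lock_guard<std::mutex> guard(queueLock);
        pending.push_back(std::move(data));
    }
    dataReady.notify_one();
}

// Sender thread: send one item at a time, waiting whenever the queue is empty.
void senderLoop()
{
    for (;;)
    {
        std::unique_lock<std::mutex> guard(queueLock);
        dataReady.wait(guard, [] { return !pending.empty(); });
        std::string item = std::move(pending.front());
        pending.pop_front();
        guard.unlock();
        // blockingSend(item);   // stand-in: do the send and wait for it to finish
    }
}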
Now, what strategies can I implement in order to maintain a big sack of such buffers, how should I handle them, how can I avoid a performance penalty, etc.?
It's difficult to know the answer without knowing more about your specific design. In general I'd avoid maintaining your own "sack of buffers" and instead use the OS's built-in sack of buffers - the heap.
But in any case, what I would do in the general case is expose an interface to the callers of your code that mirrors what WSASend is doing for overlapped I/O. For example, suppose you are providing an interface to send a specific struct:
struct Foo
{
    int x;
    int y;
};
// foo will be consumed by SendFoo, and deallocated, don't use it after this call
void SendFoo(Foo* foo);
I would require users of SendFoo to allocate a Foo instance with new, and tell them that after calling SendFoo the memory is no longer "owned" by their code and they therefore shouldn't use it.
You can enforce this even further with a little trickery:
// After this operation the resultant foo ptr will no longer point to
// memory passed to SendFoo
void SendFoo(Foo*& foo);
This allows the body of SendFoo to send the address of the memory down to WSASend, but modify the passed-in pointer to NULL, severing the link between the caller's code and their memory. Of course, you can't really know what the caller is doing with that address; they may have a copy elsewhere.
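The body might look roughly like this (a sketch only; FooSend and g_socket are names I'm making up here, and all error handling is omitted):

#include <winsock2.h>

// Hypothetical per-send bookkeeping: owns the OVERLAPPED and remembers the Foo to delete.
struct FooSend
{
    OVERLAPPED overlapped;
    WSABUF     wsabuf;
    Foo*       payload;
};

extern SOCKET g_socket;   // assumed: the connected socket lives elsewhere

void SendFoo(Foo*& foo)
{
    FooSend* ctx = new FooSend();
    ZeroMemory(&ctx->overlapped, sizeof(ctx->overlapped));
    ctx->payload    = foo;
    ctx->wsabuf.buf = reinterpret_cast<char*>(foo);
    ctx->wsabuf.len = sizeof(Foo);
    WSASend(g_socket, &ctx->wsabuf, 1, NULL, 0, &ctx->overlapped, NULL);
    foo = NULL;   // the caller's pointer no longer refers to the in-flight memory
    // ctx->payload and ctx are deleted in the completion handler, after the send finishes
}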
This interface also enforces that a single block of memory will be used with each WSASend. You are really treading into dangerous territory if you try to share one buffer between two WSASend calls.