Large File I/O Async vs. Multi-Threaded Sync I/O [closed] - c++

I've got a Linux program which is copying fairly large files (400 MB to 10 GB) to a remote NFS server. I am currently using synchronous I/O calls to copy the data to the NFS mount. All of these calls are occurring within separate threads in a thread pool, so I'm not really blocking the operation of the main thread.
I've heard a lot about using Linux AIO for tasks like these. But I don't really see the advantage it would give for large files.
What are some of the pros and cons of using AIO versus running synchronous I/O in threads? Are there any statistical comparisons of this kind of scenario out there on the web?

To speed up the transfer, you may want to copy file chunks in parallel. With synchronous I/O this requires many parallel threads. With AIO, you can keep multiple transfers in flight from a limited number of threads, saving the memory that would otherwise go to thread stacks. But if the number of parallel transfers the network and the NFS server can handle is small anyway, threads are the simpler choice from a programming standpoint.
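As a rough sketch of what the AIO side looks like, here is a minimal POSIX AIO (<aio.h>) example that keeps two writes in flight from a single thread. The file path and chunk size are placeholders, error handling is abbreviated, and note that glibc implements POSIX AIO with an internal thread pool, which differs from the kernel-native io_submit interface that the term "Linux AIO" usually refers to.

// Minimal sketch: queue several writes with POSIX AIO from a single thread.
// Error handling is abbreviated; the file name and chunk size are placeholders.
#include <aio.h>
#include <fcntl.h>
#include <unistd.h>
#include <cerrno>
#include <vector>

int main() {
    const size_t kChunk = 1 << 20;                 // 1 MiB per request (placeholder)
    int fd = open("/mnt/nfs/target.bin", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) return 1;

    std::vector<char> buf0(kChunk, 'a'), buf1(kChunk, 'b');
    aiocb cb0{}, cb1{};

    cb0.aio_fildes = fd; cb0.aio_buf = buf0.data();
    cb0.aio_nbytes = kChunk; cb0.aio_offset = 0;

    cb1.aio_fildes = fd; cb1.aio_buf = buf1.data();
    cb1.aio_nbytes = kChunk; cb1.aio_offset = kChunk;

    // Both requests are in flight at once; no extra threads in our own code.
    if (aio_write(&cb0) != 0 || aio_write(&cb1) != 0) return 1;

    // Wait for completion (polling for brevity; aio_suspend() avoids the spin).
    while (aio_error(&cb0) == EINPROGRESS || aio_error(&cb1) == EINPROGRESS)
        usleep(1000);

    ssize_t done0 = aio_return(&cb0), done1 = aio_return(&cb1);
    close(fd);
    return (done0 < 0 || done1 < 0) ? 1 : 0;
}

With a thread pool you would instead issue one plain pwrite() per chunk from each worker; the trade-off is thread-stack memory and scheduling overhead versus the extra bookkeeping of tracking outstanding AIO requests.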

Related

How to handle multiple clients connected to a server in CPP? [closed]

I am trying to learn Winsock2 by following a tutorial. The problem is that the last section, which covers handling multiple clients, has empty code. How would this be achieved with multithreading in a clean way?
Code: https://pastebin.com/D3L8CgAi
To clarify: I would not use threads to handle multiple clients.
To your question:
One thread should listen for new connections.
When a connection is accepted, a new socket is created.
For each accepted socket, create a thread for reading from and writing to that socket (a minimal sketch follows this list).
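Here is a minimal sketch of that thread-per-client layout with Winsock2 and std::thread. The port number and the echo behaviour are placeholders, and error handling is abbreviated.

// Minimal thread-per-client sketch with Winsock2 (error handling abbreviated).
// Port 5555 and the echo behaviour are placeholders, not from the tutorial.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <thread>
#pragma comment(lib, "Ws2_32.lib")

void handle_client(SOCKET client) {
    char buf[512];
    int n;
    // Read until the client disconnects; echo the bytes back.
    while ((n = recv(client, buf, sizeof(buf), 0)) > 0)
        send(client, buf, n, 0);
    closesocket(client);
}

int main() {
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) return 1;

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(5555);

    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, SOMAXCONN);

    // One thread accepts; each accepted socket gets its own handler thread.
    for (;;) {
        SOCKET client = accept(listener, nullptr, nullptr);
        if (client == INVALID_SOCKET) break;
        std::thread(handle_client, client).detach();
    }

    closesocket(listener);
    WSACleanup();
    return 0;
}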
The reason I would not implement it this way is that it does not scale well. After roughly 100 concurrent connections (maybe more, maybe less) the process will crash from running out of memory, because threads are expensive.
Google "multi thread socket windows C++" and you should find numerous examples, including videos with explanations.
If you really want to create a scalable server, review libraries such as libevent (which wrap asynchronous mechanisms such as epoll).

What design pattern or programming technique to use to separate data flow from control flow? [closed]

In a distributed system there is one management instance and multiple processing instances. The management instance sends commands to the different processing instances. So the processing nodes need to have some command parsing logic and more logic to act on these commands.
Throughput is important for the processing instances so the actual data flow should be impacted as little as possible.
Is there a design pattern or programming technique I can use to clearly separate data and control flow in the processing instances while keeping performance of the data flow as high as possible?
Edit:
The general implementation as of now is as follows: there are N pooled processing threads and a single control thread. At least conceptually, they all have their own private data structures. What the control thread can do is change the actual thread function. I see design patterns as general, high-level designs that I don't have to follow closely, but I am still interested in whether there is such a high-level design that minimizes the disturbance of the processing threads.
I'm targeting C++17, if that should be of any concern.
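For concreteness, here is a minimal C++17 sketch of the setup described in the edit: pooled processing threads whose work function the single control thread swaps through an atomic pointer, so the hot data path pays only one atomic load per iteration. All names and the two work functions are invented for illustration.

// Sketch: control thread swaps the workers' function via one atomic pointer.
#include <atomic>
#include <chrono>
#include <thread>
#include <vector>

using WorkFn = void (*)(int worker_id);

void fast_path(int) { /* normal data processing */ }
void drain_path(int) { /* alternative behaviour ordered by the controller */ }

std::atomic<WorkFn> current_work{fast_path};
std::atomic<bool> running{true};

void worker(int id) {
    while (running.load(std::memory_order_relaxed)) {
        // The only control-flow cost on the data path: one atomic pointer load.
        WorkFn fn = current_work.load(std::memory_order_acquire);
        fn(id);
    }
}

int main() {
    std::vector<std::thread> pool;
    for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);

    // Control thread: after receiving a command, switch every worker's behaviour.
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    current_work.store(drain_path, std::memory_order_release);

    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    running.store(false, std::memory_order_relaxed);
    for (auto& t : pool) t.join();
}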

Use clojure to develop game server [closed]

I'm developing a server for a game. As you know, in games many data structures need to be mutable, but Clojure's data structures are immutable. Is there a good way to handle this? Should I use Clojure for it?
Mutating data structures allows you to squeeze the last ounces of performance out of your code, but given that you're writing a server, network latency probably has a much greater impact than memory allocations. Clojure should be suitable, certainly as a starting point.
While Clojure's data structures are immutable, application state can be managed via atoms, refs, core.async loop state, and data pipelines. A Clojure application is hardly static just because its data structures are.
The biggest risk you face right now is figuring out what to build, and Clojure's live development model will accelerate the learning loop. You can redefine functions while the server is running and see their effects immediately.
I suggest you prototype your server in Clojure and then, if performance gains need to be made, profile the code. If necessary, you can introduce transients and volatiles, and port performance-critical sections to Java.

Blocking and Non-blocking I/Os in C++ definition and implementation [closed]

Recently, due to a C++ project I'm working on, I came across the concept of non-blocking I/O in C++.
If someone needs a cross-platform solution, can non-blocking mode be implemented without Boost.Asio, libuv, or any similar external library? An example would be quite helpful to illustrate the difference between blocking and non-blocking I/O.
I/O, in any computational sense, takes time. Where that time is spent is determined by whether an I/O operation is "blocking" or "non-blocking".
Non-blocking I/O takes place outside of the calling thread. This usually means polling, or a "busy wait", for a signal from the I/O layer telling you there is data.
Blocking I/O operations halt execution of the calling thread until the operation is done.
For network sockets:
Non-blocking read operations return "instantly", and you will need to poll the socket to find out whether it has finished reading and, if so, how much data was read.
Blocking read operations will wait until data has been read before returning.
For files:
It is a similar case to network sockets, only you are reading from a local hardware device, so it is probably going to be quicker.
As for multithreading, it is a vicious beast to work with. As a rule of thumb, it shouldn't be in your vocabulary of C++ terminology unless you absolutely need it.
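Since the question asked for an example without external libraries, here is a sketch contrasting the two modes on a POSIX socket; the Windows equivalent switch is ioctlsocket() with FIONBIO, so a truly cross-platform version needs a small #ifdef. The socket is assumed to have been created and connected elsewhere.

// Sketch: blocking vs. non-blocking recv() on an already-connected POSIX socket.
#include <fcntl.h>
#include <sys/socket.h>
#include <cerrno>
#include <cstdio>

void blocking_read(int sock) {
    char buf[256];
    // Blocks the calling thread until data arrives or the peer closes.
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    std::printf("blocking recv returned %zd\n", n);
}

void non_blocking_read(int sock) {
    // Switch the descriptor to non-blocking mode.
    int flags = fcntl(sock, F_GETFL, 0);
    fcntl(sock, F_SETFL, flags | O_NONBLOCK);

    char buf[256];
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
        // Returns immediately with no data; the caller must retry later or use
        // select()/poll()/epoll() to learn when the socket becomes readable.
        std::printf("no data yet, do other work and try again later\n");
    } else {
        std::printf("non-blocking recv returned %zd\n", n);
    }
}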

TCP vs Shared Memory? [closed]

I understand that if shared memory is used correctly, it can be faster than any other kind of IPC. My question is a bit more specific: if I transfer many small packets, e.g. 100 bytes, from different programs to one main program, what kind of speed difference can I expect?
The benefit from using shared memory will not be that large, because you will end up using condition variables in the shared memory (cf. pthread_condattr_setpshared; it is a substantial amount of coding work, by the way). Your logic is then governed by the OS scheduler, so it is not very different from using a localhost TCP connection, which on most operating systems has a separate and faster implementation than standard TCP.
If it is acceptable to rely entirely on a spinlock in the shared memory, then you will indeed see a substantial speedup, on the order of 3x.
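To give a feel for the setup work the first paragraph alludes to, here is a sketch of placing a process-shared mutex and condition variable in POSIX shared memory for handing over 100-byte packets. The segment name "/pkt_queue" and the single-slot layout are placeholders, and only the creation side is shown.

// Sketch: process-shared mutex + condition variable in POSIX shared memory.
#include <pthread.h>
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>

struct SharedBlock {
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
    size_t          len;       // bytes currently in `data`
    char            data[100]; // one small packet
};

SharedBlock* create_shared_block() {
    int fd = shm_open("/pkt_queue", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return nullptr;
    if (ftruncate(fd, sizeof(SharedBlock)) != 0) return nullptr;

    void* mem = mmap(nullptr, sizeof(SharedBlock),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (mem == MAP_FAILED) return nullptr;
    auto* blk = static_cast<SharedBlock*>(mem);

    // Mark the mutex and condition variable as shareable between processes.
    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&blk->mtx, &ma);

    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&blk->cv, &ca);

    blk->len = 0;
    return blk;
}

Producers would lock the mutex, copy a packet into data, and signal the condition variable; the main program waits on it. Every such wake-up goes through the OS scheduler, which is why the latency ends up comparable to a loopback TCP socket unless you spin instead.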