Blocking and Non-blocking I/O in C++: definition and implementation [closed]

Recently, due to a C++ project I'm working on, I came across the concept of non-blocking I/O in C++.
If someone needs a cross-platform solution, can non-blocking mode be implemented without using boost.asio, libuv, or any other external library? An example would be quite helpful to illustrate the difference between blocking and non-blocking I/O.

I/O, in any computational sense, takes time. Where that time is spent is determined by whether an I/O operation is "blocking" or "non-blocking".
Non-blocking I/O does not tie up the calling thread: the call returns immediately, and the caller typically polls (or waits on a readiness notification) to learn when data is available.
Blocking I/O operations halt execution of the calling thread until the operation completes.
For network sockets:
Non-blocking reads return "instantly"; you then need to poll the socket to find out whether data has arrived and, if so, how much was read.
Blocking reads wait until data has been read before returning control to the caller.
For files:
The situation is similar to network sockets, except you are reading from a local hardware device, so it's probably going to be quicker.
As for multithreading, it is a vicious beast to work with. As a rule of thumb it shouldn't be in your C++ vocabulary unless you absolutely need it.
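To make the difference concrete, here is a minimal POSIX sketch (my own illustration; `sock` is assumed to be an already-connected TCP socket descriptor). The blocking variant parks the calling thread inside recv(); the non-blocking variant returns immediately with EWOULDBLOCK until data arrives. Note there is no truly portable, library-free version, since standard C++ has no socket API; on Windows you would use ioctlsocket(FIONBIO) and check for WSAEWOULDBLOCK instead.

```cpp
#include <fcntl.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>
#include <cstdio>

void blocking_read(int sock) {
    char buf[4096];
    // Blocks the calling thread until at least one byte arrives (or the peer closes).
    ssize_t n = recv(sock, buf, sizeof(buf), 0);
    if (n > 0) std::printf("read %zd bytes\n", n);
}

void nonblocking_read(int sock) {
    // Switch the socket into non-blocking mode.
    fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

    char buf[4096];
    for (;;) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n > 0) { std::printf("read %zd bytes\n", n); break; }
        if (n == 0) break;                       // peer closed the connection
        if (errno == EWOULDBLOCK || errno == EAGAIN) {
            // No data yet: the call returned immediately. A real program would
            // poll()/select() or do other useful work here instead of spinning.
            continue;
        }
        break;                                   // genuine error
    }
}
```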

Related

Are virtual functions safe in real-time audio programming? [closed]

Real-time audio programming has particular constraints, due to the need to avoid audio glitches. Specifically, allocating and deallocating memory, or otherwise interacting with the operating system, should not be done in the audio thread.
When calling a virtual function, the program must find the relevant virtual table, look up the function pointer, and then call the function through that pointer. Is this process real-time safe?
Yes, it's fine. Virtual function dispatch is just like writing (*(obj->vtable[5]))(obj, args...). It doesn't involve any operations of unknown or possibly surprising complexity like allocating memory or I/O.
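As a rough sketch (my own illustration, not from the answer above) of why the cost is bounded: the only work a virtual call adds is loading the object's vtable pointer and calling through one of its slots.

```cpp
struct Processor {
    virtual void process(float* buf, int n) = 0;   // dispatched through the vtable
    virtual ~Processor() = default;
};

struct Gain : Processor {
    float g = 0.5f;
    void process(float* buf, int n) override {
        for (int i = 0; i < n; ++i) buf[i] *= g;   // plain arithmetic: no allocation, no I/O
    }
};

void audio_callback(Processor& p, float* buf, int n) {
    // Conceptually: load p's vtable pointer, load the slot for process(),
    // then call through that pointer. A small, constant cost on every call.
    p.process(buf, n);
}
```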
A real-time system is not defined by the programming language, but rather by the OS and hardware.
So long as the system is real-time and the software executing on it is deterministic, you will get real-time performance. Regarding your question, the use of virtual functions does not violate determinism.
Another concern might be latency. The amount of latency that you might encounter will be determined by the OS, the hardware, and the software, but as Matt Timmermans mentioned in his answer, virtual functions cause little overhead and will not contribute significantly to latency.

How to handle multiple clients connected to a server in CPP? [closed]

I am trying to learn Winsock2 by following a tutorial. The problem is that the last section, which is supposed to cover handling multiple clients, has empty code. How would this be achieved with multithreading in a clean way?
Code: https://pastebin.com/D3L8CgAi
Since links to pastebin must be accompanied by code, I need to add this.
To clarify: I would not use threads to handle multiple clients.
To your question:
1 thread should listen for new connections.
When a connection is accepted a new socket is created.
For each accepted socket: create a thread for reading/writing to that socket.
The reason I would not implement it this way is that it will not scale well. After roughly a hundred concurrent connections (maybe more, maybe less) the process may run out of memory and crash, because threads are expensive.
Google "multi thread socket windows C++" you should find numerous examples including videos with explanations.
If you really want to create a scalable server review libraries such as libevent (which wrap asynchronous mechanisms such as epoll).
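For completeness, here is a minimal sketch of the thread-per-client pattern described above, using Winsock2 and std::thread. The port number and echo behaviour are placeholders, error handling is trimmed, and the worker vector grows without bound; link against ws2_32.lib.

```cpp
#include <winsock2.h>
#include <thread>
#include <vector>

void handle_client(SOCKET client) {
    char buf[512];
    int n;
    while ((n = recv(client, buf, sizeof(buf), 0)) > 0)
        send(client, buf, n, 0);            // echo back whatever was received
    closesocket(client);
}

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(5555);            // placeholder port
    addr.sin_addr.s_addr = INADDR_ANY;
    bind(listener, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listener, SOMAXCONN);

    std::vector<std::thread> workers;
    for (;;) {
        // One thread (here, the main thread) listens for new connections.
        SOCKET client = accept(listener, nullptr, nullptr);
        if (client == INVALID_SOCKET) break;
        // One thread per accepted socket handles reading/writing for that client.
        workers.emplace_back(handle_client, client);
    }
    for (auto& t : workers) t.join();
    closesocket(listener);
    WSACleanup();
}
```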

How is Erlang better than other languages at concurrency? [closed]

I don't understand why Erlang itself is great at concurrency.
Is there any way another language, such as C#, could be as good as Erlang if we use some trick?
Or is it a very specific language feature of Erlang that most languages don't have?
Could we write C to be like Erlang?
One major property of Erlang is that it was built from the ground up to be a concurrent language. Erlang supports hundreds of thousands of lightweight processes in a single virtual machine. Because Erlang's processes are completely independent of OS processes, they are very lightweight, with low per-process overhead. Thus when using Erlang for concurrency-oriented programming you get a lot of advantages out of the box:
Fast process creation/destruction
Ability to support millions of concurrent processes with largely unchanged characteristics.
Fast asynchronous message passing.
Copying message-passing semantics (share-nothing concurrency).
Process monitoring.
Selective message reception.
This Erlang-style concurrency is not impossible to do in C, but it would be hard. Read this blog for more information on Erlang-style concurrency.
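As a rough illustration (in C++ rather than C, and covering only the copying, share-nothing message-passing point above), this is roughly what a hand-rolled mailbox between two "processes" looks like; Erlang provides this, plus cheap process creation, monitoring, and selective receive, as language primitives.

```cpp
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A hand-rolled mailbox: messages are copied/moved in, so the threads share nothing.
class Mailbox {
    std::queue<std::string> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void send(std::string msg) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
        cv_.notify_one();
    }
    std::string receive() {                  // block until a message arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }
};

int main() {
    Mailbox box;
    std::thread consumer([&] {
        for (int i = 0; i < 3; ++i)
            std::printf("got: %s\n", box.receive().c_str());
    });
    box.send("hello");
    box.send("world");
    box.send("stop");
    consumer.join();
}
```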

Banker's Algorithm with a real-time process [closed]

How can we give a process from Task Manager (like notepad.exe) as an input process to my Banker's Algorithm (deadlock detection)?
It's going to be hard, and probably infeasible, to keep track of all the OS and external conditions needed to implement a real deadlock-prevention algorithm on a real application. Modern OSes (when we're not talking about RT-aware systems) prefer not to implement such algorithms due to their overwhelming complexity and cost.
In other words, you can get out of a Windows deadlock, in the worst case, with a simple reboot. And given how rarely that actually happens, it isn't deemed a huge problem in the desktop OS market.
Thus I recommend writing a simple test case with a dummy application that will either
Serve your purpose
Allow you to know exactly what's being used by your application and let you manage the complexity
As a side note: applications like notepad.exe or similar are not real-time processes, even if you give them "Real Time" priority in the Windows Task Manager (they are not even soft real-time). Real real-time processes have time constraints (i.e. deadlines) that they MUST observe. This isn't true in any desktop OS, since desktop OSes are built with a different concept in mind (time sharing). Linux has some RT patches (e.g. Xenomai) that make the kernel's scheduling algorithm truly real-time, but I'm not aware of the current status of that work.
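If what you need is the algorithm itself rather than live OS data, a safety check over the allocation/need matrices (toy numbers below, purely for illustration) is the core of the Banker's Algorithm:

```cpp
#include <cstdio>
#include <vector>

// Banker's safety check: returns true if some execution order lets every
// process finish with the currently available resources.
bool is_safe(std::vector<int> available,
             const std::vector<std::vector<int>>& allocation,
             const std::vector<std::vector<int>>& need) {
    const size_t nproc = allocation.size(), nres = available.size();
    std::vector<bool> finished(nproc, false);

    for (size_t done = 0; done < nproc; ) {
        bool progressed = false;
        for (size_t p = 0; p < nproc; ++p) {
            if (finished[p]) continue;
            bool can_run = true;
            for (size_t r = 0; r < nres; ++r)
                if (need[p][r] > available[r]) { can_run = false; break; }
            if (can_run) {
                for (size_t r = 0; r < nres; ++r)
                    available[r] += allocation[p][r];   // process finishes, releases resources
                finished[p] = true; ++done; progressed = true;
            }
        }
        if (!progressed) return false;                  // no process can proceed: unsafe state
    }
    return true;
}

int main() {
    // Toy data for illustration only.
    std::vector<int> available = {3, 3, 2};
    std::vector<std::vector<int>> allocation = {{0,1,0}, {2,0,0}, {3,0,2}};
    std::vector<std::vector<int>> need       = {{7,4,3}, {1,2,2}, {6,0,0}};
    std::printf("state is %s\n", is_safe(available, allocation, need) ? "safe" : "unsafe");
}
```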

Large File I/O Async vs. Multi-Threaded Sync I/O [closed]

I've got a Linux program which copies fairly large files (400 MB to 10 GB) to a remote NFS server. I am currently using synchronous I/O calls to copy the data to the NFS mount. All of these calls occur in separate threads in a thread pool, so I'm not blocking the operation of the main thread.
I've heard a lot about using Linux AIO for tasks like these. But I don't really see the advantage it would give for large files.
What are some of the pros and cons of using AIO versus running synchronous I/O in threads? Are there any statistical comparisons of this type of scenario out there on the web?
To speed up the transfer, you may want to copy file chunks in parallel. With synchronous I/O this requires many parallel threads. With AIO, you can initiate multiple data transfers from a limited number of threads, saving memory on thread stacks. But if the number of parallel transfers that the network and the NFS server can handle is limited, you are better off using threads for simplicity of programming.
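As a rough sketch of the threaded, synchronous-I/O variant being described (the file names, chunk size, and thread count are placeholders, and error handling is minimal):

```cpp
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <algorithm>
#include <thread>
#include <vector>

// Copy `src` to `dst` by splitting the file into ranges and copying each range
// in a worker thread with synchronous pread/pwrite. An AIO version would issue
// the same reads/writes from a single thread and collect completions instead.
void parallel_copy(const char* src, const char* dst, int nthreads = 4) {
    int in  = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct stat st{};
    fstat(in, &st);
    const off_t total = st.st_size;
    const off_t per_thread = (total + nthreads - 1) / nthreads;

    std::vector<std::thread> workers;
    for (int t = 0; t < nthreads; ++t) {
        workers.emplace_back([=] {
            std::vector<char> buf(1 << 20);                    // 1 MiB per read
            off_t pos = t * per_thread;
            const off_t end = std::min(pos + per_thread, total);
            while (pos < end) {
                ssize_t n = pread(in, buf.data(),
                                  std::min<off_t>(buf.size(), end - pos), pos);
                if (n <= 0) break;
                pwrite(out, buf.data(), n, pos);               // may write short; simplified
                pos += n;
            }
        });
    }
    for (auto& w : workers) w.join();
    close(in);
    close(out);
}
```

Calling something like `parallel_copy("/path/to/src", "/mnt/nfs/dst")` copies the ranges concurrently; whether that beats a single sequential stream depends entirely on how many parallel requests the network and the NFS server can actually absorb.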