What's the deal with boost.asio and file i/o? - c++

I've noticed that boost.asio has a lot of examples involving sockets, serial ports, and all sorts of non-file examples. Google hasn't really turned up a lot for me that mentions if asio is a good or valid approach for doing asynchronous file i/o.
I've got gobs of data I'd like to write to disk asynchronously. This can be done with native overlapped I/O on Windows (my platform), but I'd prefer a platform-independent solution.
I'm curious whether:
boost.asio has any kind of file support
boost.asio's file support is mature enough for everyday file i/o
file support will ever be added. What's the outlook for this?

Has boost.asio any kind of file support?
Starting with (I think) Boost 1.36 (which contains Asio 1.2.0) you can use [boost::asio::]windows::stream_handle or windows::random_access_handle to wrap a HANDLE and perform asynchronous read and write operations on it that use the OVERLAPPED structure internally.
User Lazin also mentions boost::asio::windows::random_access_handle that can be used for async operations (e.g. named pipes, but also files).
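As a minimal sketch of what this looks like (assuming a Windows build with Boost.Asio available; "output.bin" and the data are placeholders, error handling is omitted, and modern Boost spells the context io_context where older releases say io_service):

    #include <boost/asio.hpp>
    #include <windows.h>
    #include <iostream>

    int main() {
        boost::asio::io_context io;

        // The handle must be opened with FILE_FLAG_OVERLAPPED for async I/O.
        HANDLE h = ::CreateFileA("output.bin", GENERIC_WRITE, 0, nullptr,
                                 CREATE_ALWAYS, FILE_FLAG_OVERLAPPED, nullptr);

        boost::asio::windows::random_access_handle file(io, h);

        const char data[] = "gobs of data";
        // Write asynchronously at offset 0; the OVERLAPPED plumbing is internal.
        boost::asio::async_write_at(file, 0, boost::asio::buffer(data),
            [](boost::system::error_code ec, std::size_t n) {
                if (!ec) std::cout << "wrote " << n << " bytes\n";
            });

        io.run();  // pump completions until the write finishes
    }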
Is boost.asio file support mature enough for everyday file i/o?
As Boost.Asio in itself is widely used by now, and the implementation uses overlapped IO internally, I would say yes.
Will file support ever be added? What's the outlook for this?
As there's no roadmap found on the Asio website, I would say that there will be no new additions to Boost.Asio for this feature. Although there's always the chance of contributors adding code and classes to Boost.Asio. Maybe you can even contribute the missing parts yourself! :-)

boost::asio file i/o on Linux
On Linux, asio uses the epoll mechanism to detect if a socket/file descriptor is ready for reading/writing. If you attempt to use vanilla asio on a regular file on Linux you'll get an "operation not permitted" exception because epoll does not support regular files on Linux.
The workaround is to configure asio to use the select mechanism on Linux, which you can do by defining BOOST_ASIO_DISABLE_EPOLL. The trade-off is that select tends to be slower than epoll when you're working with a large number of open sockets. Open the file as usual with open() and then pass the file descriptor to a boost::asio::posix::stream_descriptor.
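A minimal sketch of that workaround (assuming BOOST_ASIO_DISABLE_EPOLL is defined project-wide, e.g. via a compiler flag, before any asio header is included; the file name is a placeholder and error handling is omitted):

    #include <boost/asio.hpp>   // build with -DBOOST_ASIO_DISABLE_EPOLL
    #include <fcntl.h>
    #include <iostream>

    int main() {
        boost::asio::io_context io;

        // Open the file with the ordinary POSIX call, then hand asio the fd.
        int fd = ::open("input.txt", O_RDONLY);
        boost::asio::posix::stream_descriptor file(io, fd);

        char buf[4096];
        file.async_read_some(boost::asio::buffer(buf),
            [](boost::system::error_code ec, std::size_t n) {
                if (!ec) std::cout << "read " << n << " bytes\n";
            });

        io.run();
    }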
boost::asio file i/o on Windows
On Windows you can use boost::asio::windows::object_handle to wrap a HANDLE that was created from a file operation.

boost::asio::windows::random_access_handle is the easiest way to do this. If you need something more advanced, for example asynchronous LockFileEx or something else, you might extend asio and add your own asynchronous events.

io_uring has changed everything.
asio now supports async file read/write.
See the release notes:
Asio 1.21.0 release notes

ASIO supports overlapped I/O on Windows where support is good. On Unixes this idea has stagnated due to:
Files are often located on the same physical device; accessing them sequentially is preferable.
File requests often complete very rapidly because the data is physically close by.
Files are often critical to complete the basic operation of a program (e.g. reading in its configuration file must be done before initializing further)
The one common exception is serving files directly to sockets. This is such a common special case that Linux has a kernel function, sendfile(), that handles it for you, again negating the reason to use asynchronous file I/O.
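For illustration, the sendfile() fast path looks roughly like this (a sketch; client_fd is an assumed connected socket, and error handling and short-write retries are omitted):

    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    // Stream a whole file to an already-connected socket, entirely in-kernel.
    void serve_file(int client_fd, const char* path) {
        int in_fd = ::open(path, O_RDONLY);
        struct stat st;
        ::fstat(in_fd, &st);

        off_t offset = 0;
        // The kernel copies file -> socket directly; no userspace buffer.
        ::sendfile(client_fd, in_fd, &offset, st.st_size);
        ::close(in_fd);
    }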
In short: ASIO appears to reflect the underlying OS design philosophy; overlapped I/O has been largely ignored by Unix developers, so it is not supported on that platform.

Asio 1.21 appears to have added built-in file support.
For instance, asio::stream_file now exists, with all the async methods you'd expect.
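A minimal sketch of the new interface (assuming Asio 1.21+ built with io_uring support, i.e. ASIO_HAS_IO_URING defined and liburing linked; Boost.Asio spells it boost::asio::stream_file, and the file name is a placeholder):

    #include <asio.hpp>
    #include <iostream>

    int main() {
        asio::io_context io;

        // Open a file for asynchronous sequential reads.
        asio::stream_file file(io, "input.txt", asio::stream_file::read_only);

        char buf[4096];
        file.async_read_some(asio::buffer(buf),
            [](std::error_code ec, std::size_t n) {
                if (!ec) std::cout << "read " << n << " bytes\n";
            });

        io.run();
    }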

Linux has an async file I/O library that is no harder to use than the Windows APIs for this job (I've used it). Both sets of operating systems implement the same conceptual architecture. They differ in details that are relevant to writing a good library, but not to the point that you cannot have a common interface for both OS platforms (I've used one).
Basically, all flavors of Async File I/O follow the "Fry Cook" architecture. Here's what I mean in the context of a Read op: I (processing thread) go up to a fast food counter (OS) and ask for a cheeseburger (some data). It gives me a copy of my order ticket (some data structure) and issues a ticket in the back to the cook (the Kernel & file system) to cook my burger. I then go sit down or read my phone (do other work). Later, somebody announces that my burger is ready (a signal to the processing thread) and I collect my food (the read buffer).
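To make the analogy concrete, here is the same flow sketched with POSIX AIO (one way Unix exposes this architecture; the file name is a placeholder, error handling is omitted, and real code would use aio_suspend() or a completion notification instead of spinning):

    #include <aio.h>
    #include <fcntl.h>
    #include <cerrno>
    #include <cstring>

    int main() {
        char buf[4096];

        // Place the order: describe what to read and where to put it.
        aiocb cb;
        std::memset(&cb, 0, sizeof(cb));
        cb.aio_fildes = ::open("input.txt", O_RDONLY);
        cb.aio_buf    = buf;
        cb.aio_nbytes = sizeof(buf);
        cb.aio_offset = 0;
        ::aio_read(&cb);                 // hand the ticket to the kitchen

        // ... do other work (read your phone) ...

        while (::aio_error(&cb) == EINPROGRESS) { /* burger still cooking */ }
        ssize_t n = ::aio_return(&cb);   // collect the food: bytes read
        (void)n;
    }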

Related

Regarding handling more than 1024 socket descriptors

I have written a chat server using C on Linux. I have tested it and it works fine with respect to performance. The only thing holding it back is that I am using the select system call for handling socket descriptors. Since select has a limit of 1024 descriptors, my chat server can handle at most 1024 users concurrently.
I know that the other option I can use is poll, but I am not so sure about it and its performance compared to select.
Please suggest the most effective way to resolve this situation.
poll() can be used as an almost drop-in replacement for select(), and will allow you to exceed 1024 file descriptors (you can make the array passed to poll() as large as you want).
It will have similar performance characteristics to select(), since both require the kernel and userspace application to scan the entire array - but if select() is working OK for you, then poll() should too. (There is actually a slight performance improvement in poll() - the .events field, specifying the events you are interested in for each file descriptor, is not changed by poll(), so you don't have to rebuild the array before every call like you do with the file descriptor sets passed to select()).
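A minimal skeleton of the replacement (listen_fd is an assumed listening socket; the accept/recv handling is elided):

    #include <poll.h>
    #include <vector>

    void event_loop(int listen_fd) {
        std::vector<pollfd> fds;
        fds.push_back({listen_fd, POLLIN, 0});   // watch for new connections

        for (;;) {
            // Unlike select(), .events survives the call: no rebuilding.
            int ready = ::poll(fds.data(), static_cast<nfds_t>(fds.size()), -1);
            if (ready < 0) break;

            for (pollfd& p : fds) {
                if (p.revents & POLLIN) {
                    // accept() on listen_fd, or recv() on a client fd ...
                }
            }
        }
    }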
If you later find yourself having performance problems caused by scanning the poll file descriptor array, you can consider switching to the epoll interface, which is more complicated but also scales better with very large numbers of file descriptors.
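For comparison, the corresponding epoll skeleton (same assumed listen_fd; error handling omitted):

    #include <sys/epoll.h>

    void epoll_loop(int listen_fd) {
        int ep = ::epoll_create1(0);

        epoll_event ev{};
        ev.events  = EPOLLIN;
        ev.data.fd = listen_fd;
        ::epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

        epoll_event events[64];
        for (;;) {
            // Only ready descriptors are returned: no full-array scan.
            int n = ::epoll_wait(ep, events, 64, -1);
            for (int i = 0; i < n; ++i) {
                // accept() new clients or recv() from events[i].data.fd ...
            }
        }
    }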
Your question is known as the C10K problem (how to deal with more than 10 thousand simultaneous connections). You'll find a lot of resources on the web.
And you should consider select an obsolete system call. Even with only dozens of file descriptors, you should at least prefer poll.
Notice that Qt and Gtk provide you with event loop machinery, often using poll (and QtCore or Glib can be used outside of graphical interfaces). There are also libev and libevent. I suggest using one of them.
Linux has no 1024 limit on select(). But:
select() performance is very poor
FreeBSD does :)
You can use poll(). But its performance suffers when the number of active connections increases.
Using epoll() is preferable on Linux; however, I would suggest using libevent.
libevent is a fast, clean and portable way to implement heavily loaded servers, and on Linux it has epoll under the hood.
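A minimal libevent skeleton for reference (a sketch against the libevent 2.x API; listen_fd and the callback body are placeholders):

    #include <event2/event.h>

    // Called whenever listen_fd becomes readable.
    static void on_read(evutil_socket_t fd, short what, void* arg) {
        // accept()/recv() here ...
    }

    int run(int listen_fd) {
        event_base* base = event_base_new();

        // EV_PERSIST keeps the event armed after each callback.
        event* ev = event_new(base, listen_fd, EV_READ | EV_PERSIST,
                              on_read, nullptr);
        event_add(ev, nullptr);

        event_base_dispatch(base);   // loop until no events remain

        event_free(ev);
        event_base_free(base);
        return 0;
    }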

Getting to know the basics of Asynchronous programming on *nix

For some time now I have been googling a lot to learn about the various ways to achieve asynchronous programming/behavior on *nix machines and (as I already suspected) got confirmation that there is still no TRULY async pattern (concurrency using a single thread) for Linux like the one available on Windows (IOCP).
Below are the few alternatives present for Linux:
select/poll/epoll :: Cannot be done using a single thread, as epoll is still a blocking call. Also, the monitored file descriptors must be opened in non-blocking mode.
libaio :: What I have come to know is that its implementation sucks and it's still notification-based instead of completion-based, as with Windows I/O completion ports.
Boost ASIO :: It uses epoll under Linux and is thus not a true async pattern, as it spawns threads which are completely abstracted from user code to achieve the proactor design pattern
libevent :: Any reason to go for it if I prefer ASIO?
Now here come the questions :)
What would be the best design pattern for writing a fast, scalable network server using epoll (of course, we will have to use threads here :( )
I had read somewhere that "only sockets can be opened in non-blocking mode", hence epoll supports only sockets and cannot be used for disk I/O.
How true is the above statement, and why can't async programming be done on disk I/O using epoll?
Boost ASIO uses one big lock around the epoll call. I didn't actually understand what its implications can be and how to overcome it using asio itself. Similar question
How can I modify the ASIO pattern to work with disk files? Is there any recommended design pattern?
Hope somebody will be able to answer all the questions with nice explanations too. Any link to sources where the implementation details of epoll and AIO design patterns are explained is also appreciated.
Boost ASIO :: It uses epoll under Linux and is thus not a true async pattern, as it spawns threads which are completely abstracted from user code to achieve the proactor design pattern
This is not correct. The Asio library uses epoll() by default on most recent Linux kernel versions; however, threads invoking io_service::run() will invoke callback handlers as needed. There is only one place in the Asio library where a thread is used to emulate an asynchronous interface, and it is well described in the documentation:
An additional thread per io_service is used to emulate asynchronous host resolution. This thread is created on the first call to either ip::tcp::resolver::async_resolve() or ip::udp::resolver::async_resolve().
This does not make the library "not a true async pattern" as you claim; in fact, its name would disagree with you by definition.
1) What would be the best design pattern for writing a fast, scalable network server using epoll (of course, we will have to use threads here :( )
I suggest using Boost Asio, it uses the proactor design pattern.
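To give a feel for the proactor pattern in Asio, here is a minimal asynchronous echo-server sketch (assuming a recent Boost.Asio; the port is a placeholder, and each session keeps itself alive via the shared_ptr captured in its handlers):

    #include <boost/asio.hpp>
    #include <memory>
    using boost::asio::ip::tcp;

    class session : public std::enable_shared_from_this<session> {
    public:
        explicit session(tcp::socket s) : socket_(std::move(s)) {}
        void start() { do_read(); }
    private:
        void do_read() {
            auto self = shared_from_this();
            socket_.async_read_some(boost::asio::buffer(buf_),
                [this, self](boost::system::error_code ec, std::size_t n) {
                    if (!ec) do_write(n);          // echo back what arrived
                });
        }
        void do_write(std::size_t n) {
            auto self = shared_from_this();
            boost::asio::async_write(socket_, boost::asio::buffer(buf_, n),
                [this, self](boost::system::error_code ec, std::size_t) {
                    if (!ec) do_read();
                });
        }
        tcp::socket socket_;
        char buf_[1024];
    };

    void do_accept(tcp::acceptor& acc) {
        acc.async_accept([&acc](boost::system::error_code ec, tcp::socket s) {
            if (!ec) std::make_shared<session>(std::move(s))->start();
            do_accept(acc);                        // keep accepting
        });
    }

    int main() {
        boost::asio::io_context io;
        tcp::acceptor acc(io, tcp::endpoint(tcp::v4(), 12345));
        do_accept(acc);
        io.run();   // call run() from several threads to scale across CPUs
    }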
3) Boost ASIO uses one big lock around the epoll call. I didn't actually understand what its implications can be and how to overcome it using asio itself
The epoll reactor uses a mutex to dispatch handlers, though in practice this is not a big concern for most applications. There are application specific ways to mitigate this behavior, such as an io_service per CPU to exploit data locality. See my answer to a similar question on this topic. It is also discussed on the Asio mailing list frequently.
4) How can I modify the ASIO pattern to work with disk files? Is there any recommended design pattern?
The Asio library does not natively support file I/O, as you noted. There have been several attempts to add it to the library; I'd suggest discussing it on the mailing list.
First of all:
got confirmation that there is still no TRULY async pattern (concurrency using a single thread) for Linux like the one available on Windows (IOCP).
You probably have a small misconception: asynchronous behavior can be built on top of a "polling" API.
More than that, a "reactor" (epoll-like) API is more powerful than a "proactor" API (IOCP), as the second can be implemented in terms of the first one (but not the other way around).
Also, some operations (for example disk I/O) are "truly" asynchronous, and other tools, such as signals combined with the Linux-specific signalfd, can cover some of the remaining cases.
Bottom line: epoll is truly asynchronous I/O.
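As an illustration of the signalfd trick mentioned above (a sketch; the returned descriptor can be added to an epoll set like any socket):

    #include <sys/signalfd.h>
    #include <signal.h>

    int make_signal_fd() {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);

        // Block normal delivery so the signal is reported only via the fd.
        sigprocmask(SIG_BLOCK, &mask, nullptr);

        // Reads on this fd yield signalfd_siginfo records, turning signals
        // into ordinary readiness events for epoll/poll/select.
        return signalfd(-1, &mask, SFD_NONBLOCK);
    }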

C++: Most common way to talk to one application from the other one

In bare outlines: I've got an application which looks through directories at startup and creates an index of special files; after that it works like a daemon. The other application creates such 'special' files and places them in some directory. What way of informing the first application about a new file (so it can index it) is the most common, simple (the first one runs continuously, so it shouldn't be slowed down too much), and cross-platform if possible?
I've looked through RPC and IPC, but they seem too heavy (and probably also non-cross-platform and slow, needing a lot of features to work; I need a simple, light, well-working way).
Pipes would be one option: see Network Programming with Pipes and Remote Procedure Calls (Windows) or Creating Pipes in C (Unix).
I haven't done this in a while, but from my experience with RPC, DCOM, COM, .NET Remoting, and socket programming, I think pipes are the most straightforward and efficient option.
For Windows (NTFS) you can get a notification from the OS that a directory was changed. But it is not cross-platform, and it doesn't by itself connect the two apps.
"IPC but them are too heavy" - no no, they are not heavy at all. You should look at named pipes - this IPC is fastest and it is in both Win/Unix-like with slight differences. Or sockets!
eisbaw suggested TCP. I'd say, to make it even simpler, use UDP.
Create a listening thread that will receive packets, and handle them from there - in all applications.
Since it is on the same PC you are very unlikely to lose any packets (strictly speaking, UDP offers no guarantee even over loopback), something that can genuinely happen with UDP over a real network.
Each application instance will need its own port, but this is easy to configure with the configuration files that you (I assume) already have.
Keep it simple (:
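A sketch of that listening thread using Boost.Asio's UDP support (assuming asio is acceptable here; the port is a placeholder that would come from the configuration files):

    #include <boost/asio.hpp>
    #include <iostream>
    using boost::asio::ip::udp;

    int main() {
        boost::asio::io_context io;

        // Bind to the local port agreed on in the config files.
        udp::socket sock(io, udp::endpoint(udp::v4(), 5555));

        char buf[1024];
        udp::endpoint sender;
        for (;;) {
            // Blocking receive; run this loop in the listening thread.
            std::size_t n = sock.receive_from(boost::asio::buffer(buf), sender);
            std::cout << "notification: " << std::string(buf, n) << "\n";
        }
    }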
Local TCP sockets are guaranteed to work - as already mentioned by Andrey.
Shared memory would be another option, take a look at
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2044.html
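If you go the shared-memory route, Boost.Interprocess is one widely used cross-platform option; a minimal sketch (the segment name "indexer_shm" and the size are placeholders):

    #include <boost/interprocess/shared_memory_object.hpp>
    #include <boost/interprocess/mapped_region.hpp>
    #include <cstring>
    using namespace boost::interprocess;

    int main() {
        // Create (or open) a named segment visible to both processes.
        shared_memory_object shm(open_or_create, "indexer_shm", read_write);
        shm.truncate(4096);                        // size the segment

        mapped_region region(shm, read_write);     // map into this process
        std::memset(region.get_address(), 0, region.get_size());

        // Both processes now share this range; you still need your own
        // synchronization (e.g. an interprocess mutex) on top of it.
    }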
As Andrey noted, if you agree on the full path ahead of time, you can just have the OS tell you when it's added. All major platforms actually support this in some form. You can use a cross-platform library for this, such as QFileSystemWatcher.
EDIT: I don't think QFileSystemWatcher will cause too much of a performance hit. It definitely relies on the underlying OS for notifications on Linux, FreeBSD, and Mac OS (and I think Windows). See http://qtnode.net/wiki/QFileSystemWatcher
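A minimal QFileSystemWatcher sketch (assuming Qt is available; the watched directory is a placeholder):

    #include <QCoreApplication>
    #include <QFileSystemWatcher>
    #include <QObject>
    #include <QDebug>

    int main(int argc, char** argv) {
        QCoreApplication app(argc, argv);

        QFileSystemWatcher watcher;
        watcher.addPath("/var/spool/special-files");   // the drop directory

        // Fires when entries are added to or removed from the directory.
        QObject::connect(&watcher, &QFileSystemWatcher::directoryChanged,
                         [](const QString& path) {
                             qDebug() << "re-scan" << path;
                         });

        return app.exec();
    }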
Memory mapped files, sockets, and named pipes are all highly efficient, cross-platform IPC mechanisms. Well, the APIs to access named pipes and memory mapped files differ between POSIX and Win32, but the basic mechanisms are similar enough that it's easy to make a cross-platform wrapper. Sockets and named pipes tend to be fast because, in inter-process situations, the OS developers (of most common OSs) have built in shortcuts that essentially make the socket / named pipe write a rather simple wrap of a memory section.

Most suitable asynchronous socket model for an instant messenger client?

I'm working on an instant messenger client in C++ (Win32) and I'm experimenting with different asynchronous socket models. So far I've been using WSAAsyncSelect for receiving notifications via my main window. However, I've been experiencing some unexpected results, with Winsock spawning an additional 5-6 threads (on top of the initial thread created when calling WSAAsyncSelect) for one single socket.
I have plans to revamp the client to support additional protocols via DLLs, and I'm afraid that my current solution won't be suitable, based on my experiences with WSAAsyncSelect, in addition to my being negative towards mixing network with UI code (in the message loop).
I'm looking for advice on what a suitable asynchronous socket model could be for a multi-protocol IM client which needs to be able to handle roughly 10-20+ connections (depending on the number of protocols, protocol design, etc.), while not using an excessive number of threads -- I am very interested in performance and keeping the resource usage down.
I've been looking at IO Completion Ports, but from what I've gathered, they seem overkill. I'd very much appreciate some input on what a suitable socket solution could be!
Thanks in advance! :-)
There are four basic ways to handle multiple concurrent sockets.
Multiplexing, that is, using select() to poll the sockets (a minimal sketch follows this list).
AsyncSelect which is basically what you're doing with WSAAsyncSelect.
Worker Threads, creating a single thread for each connection.
IO Completion Ports, or IOCP. dp mentions them above, but basically they are an OS-specific way to handle asynchronous I/O which has very good performance, but is a little more confusing to learn.
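The select()-based multiplexing from option 1 looks roughly like this (a Winsock-flavored sketch; listen_sock and clients are placeholders, and WSAStartup/error handling are omitted):

    #include <winsock2.h>
    #include <vector>

    void select_loop(SOCKET listen_sock, std::vector<SOCKET>& clients) {
        for (;;) {
            fd_set readfds;
            FD_ZERO(&readfds);
            FD_SET(listen_sock, &readfds);
            for (SOCKET s : clients) FD_SET(s, &readfds);

            // Winsock ignores select()'s first parameter; pass 0.
            if (::select(0, &readfds, nullptr, nullptr, nullptr) == SOCKET_ERROR)
                break;

            if (FD_ISSET(listen_sock, &readfds))
                clients.push_back(::accept(listen_sock, nullptr, nullptr));

            for (SOCKET s : clients)
                if (FD_ISSET(s, &readfds)) { /* recv() and handle data */ }
        }
    }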
Which you choose often depends on where you plan to go. If you plan to port the application to other platforms, you may want to choose #1 or #3, since select is not terribly different from the models used on other OSes, and most other OSes also have the concept of threads (though they may operate differently). IOCP is typically Windows-specific (although Linux now has some async I/O functions as well).
If your app is Windows-only, then you basically want to choose the best model for what you're doing. This would likely be either #3 or #4. #4 is the most efficient, as it calls back into your application (similar to WSAAsyncSelect, but with better performance and fewer issues).
The big thing you have to deal with when using threads (either IOCP or worker threads) is marshaling the data back to a thread that can update the UI, since you can't call UI functions on worker threads. Ultimately, this will involve some messaging back and forth in most cases.
If you were developing this in managed code, I'd tell you to look at Jeffrey Richter's AsyncEnumerator, but you've chosen C++, which has its pros and cons. Lots of people have written various network libraries for C++; maybe you should spend some time researching some of them.
Consider using the ASIO library you can find in Boost (www.boost.org).
Just use synchronous models. Modern operating systems handle multiple threads quite well. Async IO is really needed in rare situations, mostly on servers.
In some ways IO Completion Ports (IOCP) are overkill but to be honest I find the model for asynchronous sockets easier to use than the alternatives (select, non-blocking sockets, Overlapped IO, etc.).
The IOCP API could be clearer, but once you get past that it's actually easier to use, I think. Back when, the biggest obstacle was platform support (it needed an NT-based OS -- i.e., Windows 9x did not support IOCP). With that restriction long gone, I'd consider it.
If you do decide to use IOCP (which, IMHO, is the best option if you're writing for Windows) then I've got some free code available which takes away a lot of the work that you need to do.
Latest version of the code and links to the original articles are available from here.
And my views on how my framework compares to Boost::ASIO can be found here: http://www.lenholgate.com/blog/2008/09/how-does-the-socket-server-framework-compare-to-boostasio.html.
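For a sense of what the raw IOCP model involves, the core calls look roughly like this (a minimal sketch; socket creation, the WSARecv calls, and error handling are omitted, and real code runs several worker threads):

    #include <winsock2.h>
    #include <windows.h>

    // Associate an overlapped-enabled socket with the completion port;
    // `key` comes back with every completion for this socket.
    void register_socket(HANDLE iocp, SOCKET sock, ULONG_PTR key) {
        CreateIoCompletionPort(reinterpret_cast<HANDLE>(sock), iocp, key, 0);
    }

    // Worker loop: blocks until any outstanding overlapped op completes.
    void iocp_worker(HANDLE iocp) {
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED* ov;     // identifies the individual WSARecv/WSASend
        for (;;) {
            if (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
                // The operation tagged (key, ov) finished: dispatch it
                // back to the owning connection here.
            }
        }
    }

    int main() {
        HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE,
                                             nullptr, 0, 0);
        // ... create sockets, register_socket(iocp, s, key), issue reads ...
        iocp_worker(iocp);
    }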

alternatives to winsock2 with example server source in c++

I'm using this example implementation found at http://tangentsoft.net/wskfaq/examples/basics/select-server.html
It does most of what I need: it handles connections without blocking and does all the work in its own thread (not creating a new thread for each connection, as some examples do). But I'm worried, since I've been told Winsock will only support a maximum of 64 client connections :S
Is this 64-connection limit true?
What other choices do I have? It would be cool to have a C++ example of a similar implementation.
Thanks
Alternative library:
You should consider using boost asio. It is a cross platform networking library which simplifies many of the tasks you may have to do.
You can find the example source code you seek here.
About the 64 limit:
There is no hard 64 connection limit that you will experience with a good design. Basically if you use some kind of threading model you will not experience this limitation.
Here's some information on the limit you heard about:
4.9 - What are the "64 sockets" limitations?
There are two 64-socket limitations:
The Win32 event mechanism (e.g. WaitForMultipleObjects()) can only wait on 64 event objects at a time. Winsock 2 provides the WSAEventSelect() function which lets you use Win32's event mechanism to wait for events on sockets. Because it uses Win32's event mechanism, you can only wait for events on 64 sockets at a time. If you want to wait on more than 64 Winsock event objects at a time, you need to use multiple threads, each waiting on no more than 64 of the sockets.
The select() function is also limited in certain situations to waiting on 64 sockets at a time. The FD_SETSIZE constant defined in winsock.h determines the size of the fd_set structures you pass to select(). It's defined by default to 64. You can define this constant to a higher value before you #include winsock.h, and this will override the default value. Unfortunately, at least one non-Microsoft Winsock stack and some Layered Service Providers assume the default of 64; they will ignore sockets beyond the 64th in larger fd_sets.
You can write a test program to try this on the systems you plan on supporting, to see if they are not limited. If they are, you can get around this with threads, just as you would with event objects.
Source
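The FD_SETSIZE override the FAQ describes is a one-liner, provided it appears before the Winsock header in every translation unit (1024 here is just an example value):

    // Must precede the first include of the Winsock header everywhere,
    // or be set project-wide (e.g. /DFD_SETSIZE=1024).
    #define FD_SETSIZE 1024
    #include <winsock2.h>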
@Brian:
if ((gConnections.size() + 1) > 64) {
    // For the background on this check, see
    // www.tangentsoft.net/wskfaq/advanced.html#64sockets
    // The +1 is to account for the listener socket.
    cout << "WARNING: More than 63 client "
            "connections accepted. This will not "
            "work reliably on some Winsock "
            "stacks!" << endl;
}
To the OP:
Why would you not want to use winsock2?
You could try to look at building your own server using IOCP, although making this cross-platform is a little tricky. You could look at Boost::asio like Brian suggested.
Before you decide that you need 'alternatives to winsock2', please read this: Network Programming for Microsoft Windows.
In summary, you DON'T need an 'alternative to Winsock2'; you need to understand how to use the programming models it supplies to full effect on the platform that you're targeting. Then, if you really need cross-platform sockets code that uses async I/O, look at ASIO; but if you don't really need cross-platform code, then consider something that actually focuses on the problems you might have on the platform you do need to focus on - i.e. something Windows-specific. Go back to the book mentioned above and take a look at the various options you have.
The most performant and scalable option is to use IO Completion Ports. I have some free code available from here that makes it pretty easy to write a server that scales and performs well on a windows (NT) based platform; the linked page also links to some articles that I've written about this. A comparison of my framework to ASIO can be found here: http://www.lenholgate.com/blog/2008/09/how-does-the-socket-server-framework-compare-to-boostasio.html.