My coworkers and I have a good feeling that OpenSSL more or less needs to get pitched from our application but I want some opinions on whether it really is this bad or whether there are issues in our use of this library that could be causing us trouble.
The setting: A multi-threaded C++ application that maintains a persistent SSL connection for each user.
At 500 users it has worked fine. I'm trying to increase the limit to 1000, and at around 960 I got a segfault in SSL_read. This read is the first I/O operation for that particular connection. I had to increase the file limit in ulimit from 1024 to 4096 to get this high. So my questions are:
1) Is it possible the library needs to be configured to know to accept this many connections?
2) Is it a threading issue that might be solved with light use of mutexes? I can't afford to turn the entire SSL_read into a critical section, though.
3) Is it just a bad, buggy library that needs to be thrown out?
Based on your comments, 1 thread per connection doesn't seem like an efficient use of threads.
I would suggest a thread pool with worker threads to handle received packets. Received packets could be enqueued in a queue, and the worker threads would process packets from that queue. The OpenSSL connections could be stored in a container common to all threads. Care will have to be taken to handle the packets in order, and yes, synchronization (a mutex) will be needed.
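A minimal sketch of what I mean, using only the standard library. The Packet type and the processing step are placeholders for whatever your receive path and SSL-connection container actually look like:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Placeholder packet type; substitute whatever your receive path produces.
struct Packet { int connection_id; std::string payload; };

class PacketQueue {
public:
    void push(Packet p) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(p));
        }
        cv_.notify_one();
    }
    Packet pop() {                       // blocks until a packet is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Packet p = std::move(q_.front());
        q_.pop();
        return p;
    }
private:
    std::queue<Packet> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    PacketQueue queue;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < std::thread::hardware_concurrency(); ++i) {
        workers.emplace_back([&queue] {
            for (;;) {                   // shutdown handling omitted
                Packet p = queue.pop();
                // Look up the SSL connection for p.connection_id in the
                // shared container and process p.payload here.
            }
        });
    }
    // The receive loop(s) would call queue.push(...) as data arrives.
    for (auto& w : workers) w.join();
}
```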
Related
I'm having trouble managing the work .post()'ed to Boost.Asio's io_context, having multiple questions about it (newbie warning).
Background: I'm writing a library that connects to a large number of different hosts for short periods at a time each (connect, send data, receive answer, close), and I figured I'd use Boost.Asio. The documentation is scarce (too DRY?)
My current approach is this (assuming a quad-core machine): two physical cores run CPU-bound sync operations and post() additional work items to the io_context. Two other threads are .run()ing and performing completion handlers.
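For clarity, a stripped-down sketch of that setup (the counts and the posted work are illustrative; my real code posts from the CPU-bound producers):

```cpp
#include <boost/asio.hpp>
#include <thread>

int main() {
    boost::asio::io_context io;
    // Keep run() from returning while the producers are still posting.
    auto guard = boost::asio::make_work_guard(io);

    // Two threads performing completion handlers.
    std::thread runner1([&io] { io.run(); });
    std::thread runner2([&io] { io.run(); });

    // CPU-bound producer posting additional work items.
    std::thread producer([&io] {
        for (int i = 0; i < 1000; ++i)
            boost::asio::post(io, [i] { /* work item i */ });
    });

    producer.join();
    guard.reset();        // let run() exit once the queue drains
    runner1.join();
    runner2.join();
}
```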
1- The work scheduler
As per this amazing answer,
Boost.Asio may start some of the work as soon as it has been told about it, and other times it may wait to do the work at a later point in time.
When does boost.asio do what? On what basis is the queued work later processed?
2- Multiple Producers/ Multiple Consumers
As per This article,
At its core, Boost Asio provides a task execution framework that you can use to perform operations of any kind. You create your tasks as function objects and post them to a task queue maintained by Boost Asio. You enlist one or more threads to pick these tasks (function objects) and invoke them. The threads keep picking up tasks, one after the other till the task queues are empty at which point the threads do not block but exit.
I am failing to find a way to put a cap on the length of this task queue. This answer gives a couple of solutions, but they both involve locking, something I'd like to avoid as much as possible.
3- Are strands really necessary? How do I "disable" them?
As detailed in this answer, Boost uses an implicit strand per connection. Since I'm making potentially millions of connections, the memory savings from "bypassing" strands make sense to me. The requests I make are independent (a different host for each request) and the operations within a single connection are already serialized (a callback chain), so I have no overlapping reads and writes and no synchronization is expected from Boost.Asio. Does it make sense for me to try and bypass strands? If so, how?
4- Scaling design approach (A bit vague because I have no clue)
As stated in my background section, I'm running two io_contexts on two physical cores, each with two threads, one for writing and one for reading. My goal here is to spew packets as fast as I can, and I have already:
Compiled asio with BoringSSL (OpenSSL is a serious bottleneck)
Written my own c-ares resolver service to avoid the "async-ish" DNS queries that really run in a background thread loop.
But it still happens that my network driver starts timing out when many connections are opened. So how do I dynamically adjust Boost.Asio's throughput so that the network adapter can cope with it?
My question(s) are most likely ill-informed, as I'm no expert in network programming, and I know this is a complex problem. I'd appreciate it if someone left pointers for me to look at before closing the question or making it "dead".
Thank you.
I'm developing a peer-to-peer message parsing application, so one peer may need to handle many clients. There is also the possibility of sending and receiving large data (~20 MB as one message), and there can be situations where many peers send large data to the same peer. I've heard there are many solutions to handle this kind of situation:
Use a thread per peer
Use a loop to go through the peers and receive data when any is available
Use the select function
etc.
What is the most suitable methodology, or the most common and accepted way, to handle this kind of situation? Any advice or hints are welcome.
Update: Is there a good peer-to-peer distributed computing library or framework for C++ on the Windows platform?
Don't use a thread per peer; past the number of processors, additional threads are likely only to hurt performance. You'd also be expected to tweak the dwStackSize so that 1000 idle peers don't cost you 1000 MB of RAM.
You can use a thread pool (X threads handling Y sockets) to get a performance boost (or, ideally, IO Completion Ports), but this tends to work incredibly well for certain kinds of applications and not at all for others. Unless you're certain that yours is suited for it, I couldn't justify taking the risk.
It's entirely permissible to use a single thread and poll/send on a large number of sockets. I don't know precisely at what point "large" starts to carry a noticeable overhead, but I'd (conservatively) ballpark it somewhere between 2k-5k sockets (on below-average hardware).
The workaround for WSAEWOULDBLOCK is to keep a std::queue<BYTE> of bytes (not a queue of "packet objects") for each socket in your application (you populate this queue with the data you want to send), and to have a single background thread whose sole purpose is to drain the queues into the respective sockets' send (X bytes at a time). You can use a blocking socket for this now (since it's a background worker); but if you do use a non-blocking socket and get WSAEWOULDBLOCK, you can just keep trying to drain the queue, since here it won't obstruct the flow of your application.
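A rough sketch of that arrangement (the names are mine, and error handling and shutdown are left out):

```cpp
#include <winsock2.h>
#include <condition_variable>
#include <map>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

std::mutex g_mutex;
std::condition_variable g_cv;
std::map<SOCKET, std::queue<BYTE>> g_queues;   // one byte queue per socket

// Application side: enqueue outgoing data for a socket and wake the sender.
void QueueSend(SOCKET s, const BYTE* data, size_t len) {
    {
        std::lock_guard<std::mutex> lock(g_mutex);
        std::queue<BYTE>& q = g_queues[s];
        for (size_t i = 0; i < len; ++i) q.push(data[i]);
    }
    g_cv.notify_one();
}

// Background worker (run this on one dedicated std::thread): drain the
// queues into their sockets.
void SenderThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(g_mutex);
        g_cv.wait(lock, [] {
            for (auto& kv : g_queues)
                if (!kv.second.empty()) return true;
            return false;
        });
        // Copy the pending bytes out so the application threads are not
        // blocked on g_mutex while we sit in send().
        std::vector<std::pair<SOCKET, std::vector<BYTE>>> pending;
        for (auto& kv : g_queues) {
            if (kv.second.empty()) continue;
            std::vector<BYTE> buf;
            while (!kv.second.empty()) { buf.push_back(kv.second.front()); kv.second.pop(); }
            pending.emplace_back(kv.first, std::move(buf));
        }
        lock.unlock();
        for (auto& p : pending) {
            // A blocking send is fine here: only this worker stalls, not the app.
            send(p.first, reinterpret_cast<const char*>(p.second.data()),
                 static_cast<int>(p.second.size()), 0);
        }
    }
}
```

Copying the pending bytes out before calling send() keeps the application threads from blocking on the queue lock while the worker is inside send().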
You could use libtorrent.org, which is built on top of Boost (Boost.Asio). It focuses on efficiency and scalability.
I don't have much experience developing sockets in C++, but in C# I've had really good experience accepting connections asynchronously and passing each one to its own thread from a thread pool.
I'm running a fully operational IOCP TCP socket application. Today I was thinking about the critical section design, and now I have one endless question in my head: a global critical section, or one per client? I came to this because, as I see it, there is no point in using multiple worker threads if every thread depends on a single lock, right? I mean... right now I don't see any performance issue with 100 simultaneous clients, but what if there were 10000?
My shared resource is a per-client pre-allocated struct, so each client has its own IO context, socket and related state. There is no inter-client resource sharing, so I think that's another point in favor of the per-client CS. I use one accept thread and 8 (processors * 2) worker threads. This application is basically designed for small (< 1KB) packets, but sometimes for file streaming.
The "correct" answer probably depends on your design, the number of concurrent clients and the performance that you require from the hardware that you have available.
In general, I find it best to go with the simplest thing that works and then profile to locate hot spots.
However... You say that you have no inter-client shared resources so I assume the only synchronisation that you need to do is around 'per-connection' state.
Since it's per connection the obvious (to me) design would be for the per-connection state to contain its own critical section. What do you perceive to be the downside of this approach?
The problem with a single shared lock is that you introduce contention between connections (and threads) that have no reason to block each other. This will adversely affect performance and will likely become a hot-spot as connection numbers rise.
Once you have a per connection lock you might want to look at avoiding using it as often as possible by having the IOCP threads simply lock to place completions in a per connection queue for processing. This has the advantage of allowing a single IOCP thread to work on each connection and preventing a single connection from having additional IOCP threads blocking on it. It also works well with 'skip completion port on success' processing.
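As a rough sketch of that per-connection queueing (illustrative only; the IOCP plumbing and the contents of Completion are left out):

```cpp
#include <deque>
#include <mutex>
#include <utility>

struct Completion { /* buffer, bytes transferred, operation type, ... */ };

class Connection {
public:
    // Called from any IOCP thread: enqueue the completion and, if no thread
    // is already working on this connection, claim it and drain the queue.
    void OnCompletion(Completion c) {
        bool shouldProcess = false;
        {
            std::lock_guard<std::mutex> lock(m_);
            queue_.push_back(std::move(c));
            if (!processing_) { processing_ = true; shouldProcess = true; }
        }
        if (!shouldProcess) return;      // another thread owns this connection

        for (;;) {
            Completion next;
            {
                std::lock_guard<std::mutex> lock(m_);
                if (queue_.empty()) { processing_ = false; return; }
                next = std::move(queue_.front());
                queue_.pop_front();
            }
            Handle(next);                // no lock held while doing real work
        }
    }
private:
    void Handle(const Completion&) { /* parse, respond, issue the next read */ }
    std::mutex m_;
    std::deque<Completion> queue_;
    bool processing_ = false;
};
```

The lock only guards the queue and the 'processing' flag, so it is held very briefly; the actual work for a connection is still done by one thread at a time.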
First of all: sorry for my English.
Guys, I have a problem with POSIX sockets and/or pthreads. I'm developing on an embedded device (ARM9 CPU). A multithreaded TCP server will run on the device, and it has to be able to process a lot of incoming connections. The server gets a connection from a client and increments a counter variable (unsigned int counter). Client routines run in separate threads. All clients use one singleton class instance (this class opens and closes the same files). A client works with the files, then the client thread closes the connection socket and calls pthread_exit().
So, my TCP server can't handle more than 250 threads (counter = 249, +1 for the server thread), and I get "Resource temporarily unavailable". What's the problem?
Whenever you hit the thread limit (or, as mentioned, run out of virtual process address space due to the number of threads) you're... doing it wrong. More threads don't scale, especially not when doing embedded programming. Handle requests on a thread pool instead, and use poll(2) to serve many connections on fewer threads. This is pretty well-trod territory, and libraries (like ACE and Asio) have been leveraging this model for good reason.
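The core of that poll(2) model is small; something like this (error handling, EOF cleanup and the hand-off to worker threads are left as comments):

```cpp
#include <poll.h>
#include <sys/socket.h>
#include <cstddef>
#include <vector>

// listen_fd is an already bound and listening socket.
void event_loop(int listen_fd) {
    std::vector<pollfd> fds;
    fds.push_back({listen_fd, POLLIN, 0});

    for (;;) {
        poll(fds.data(), static_cast<nfds_t>(fds.size()), -1);  // wait for activity
        if (fds[0].revents & POLLIN) {
            int client = accept(listen_fd, nullptr, nullptr);
            fds.push_back({client, POLLIN, 0});                 // watch the new connection
        }
        for (std::size_t i = 1; i < fds.size(); ++i) {
            if (fds[i].revents & POLLIN) {
                // read here and hand the data to the thread pool;
                // on EOF/error: close(fds[i].fd) and erase the entry.
            }
        }
    }
}
```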
The 'thread-per-request' model is mainly popular because of its (perceived) simple design.
As long as you keep connections on a single logical thread (sometimes known as a strand) there is no real difference, though.
Also, if the handling of a request involves no blocking operations, you can never do better than polling and handling on a single thread after all: you can use the 'backlog' feature of bind/accept to let the kernel worry about pending connections for you! (Note: this assumes a single-core CPU; on a dual-core CPU this kind of processing would be optimal with one thread per CPU.)
Edit Addition Re:
ulimit shows how many threads the OS can handle, right? If yes, ulimit does not solve my problem because my app uses ~10-15 threads at the same time.
If that's the case, you should really double-check that you are joining or detaching all threads properly. Also think of the synchronization objects: if you consistently forget to call the relevant pthread_*_destroy functions, you'll run into the limits without even needing that many. That would of course be a resource leak. Some tools may be able to help you spot them (valgrind/helgrind come to mind).
Use ulimit -n to check the number of file system handles. You can increase it for your current session if the number is too low.
You can also edit /etc/security/limits.conf to set a permanent limit.
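You can also raise the soft limit from inside the program, up to the hard limit (a sketch; check the return values in real code):

```cpp
#include <sys/resource.h>

// Raise the per-process open-file limit as far as the hard limit allows.
void raise_fd_limit(rlim_t wanted) {
    struct rlimit rl;
    getrlimit(RLIMIT_NOFILE, &rl);
    if (rl.rlim_cur < wanted && wanted <= rl.rlim_max) {
        rl.rlim_cur = wanted;
        setrlimit(RLIMIT_NOFILE, &rl);
    }
}
```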
Usually, the first limit you are hitting on 32-bit systems is that you are running out of virtual address space when using default stack sizes.
Try explicitly specifying the stack size when creating threads (to less than 1 MB) or setting the default stack size with "ulimit -s".
Also note that you need to either pthread_detach or pthread_join your threads so that all resources will be freed.
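For example (a minimal sketch; the 256 KB stack is just an illustration, size it for your deepest call chain):

```cpp
#include <pthread.h>

void* client_routine(void* arg);   // your per-connection handler

// Spawn a detached client thread with a small explicit stack.
int spawn_client_thread(void* arg) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 256 * 1024);                  // instead of the default
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);   // resources freed on exit, no join needed

    pthread_t tid;
    int rc = pthread_create(&tid, &attr, client_routine, arg);
    pthread_attr_destroy(&attr);
    return rc;   // 0 on success; EAGAIN is the "Resource temporarily unavailable" case
}
```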
I'm writing a TCP server on Windows Server 2k8. This server receives data, parses it and then sinks it into a database. I'm now doing some tests, but the results surprise me.
The application is written in C++ and uses, directly, Winsocks and the Windows API. It creates a thread for each client connection. For each client it reads and parses the data, then insert it into the database.
Clients always connect in simultaneous chunks - ie. every once in a while about 5 (I control this parameter) clients will connect simultaneously and feed the server with data.
I've been timing both the reading-and-parsing and the database-related stages of each thread.
The first stage (reading-and-parsing) has a curious behavior. The amount of time each thread takes is roughly the same for every thread, but it is also proportional to the number of threads connecting. The server is not CPU starved: it has 8 cores and there are always fewer than 8 threads connected to it.
For example, with 3 simultaneous threads, 100k rows (per thread) will be read-and-parsed in about 4.5 s. But with 5 threads it takes 9.1 s on average!
A friend of mine suggested this scaling behavior might be related to the fact that I'm using blocking sockets. Is this right? If not, what might be the reason for this behavior?
If it is, I'd be glad if someone can point me out good resources for understanding non blocking sockets on Windows.
Edit:
Each client thread reads a line (i.e., all chars until a '\n') from the socket, then parses it, then reads again, until the parse fails or a terminator character is found. My readline routine is based on this:
http://www.cis.temple.edu/~ingargio/cis307/readings/snaderlib/readline.c
With static variables being declared as __declspec(thread).
The parsing, judging from the non-networking version, is efficient (about 2 s for 100k rows). I therefore assume the problem is in the multithreaded/network version.
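For reference, a bare-bones illustration of the kind of readline I mean (not my actual code, which follows the linked routine and keeps its buffering state in __declspec(thread) variables):

```cpp
#include <winsock2.h>

// Simplified: read characters one at a time until '\n' or the peer closes.
int read_line(SOCKET s, char* buf, int maxlen) {
    int n = 0;
    while (n < maxlen - 1) {
        char c;
        int rc = recv(s, &c, 1, 0);
        if (rc <= 0) break;            // error or connection closed
        buf[n++] = c;
        if (c == '\n') break;
    }
    buf[n] = '\0';
    return n;
}
```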
If your lines are ~120–150 characters long, you are actually saturating the network!
There's no issue with the sockets. Simply transferring 3 × 100k lines of 150 bytes each over a 100 Mbps line (take 10 bits per byte on the wire to account for headers) will take... 4.5 s: 3 × 100,000 × 150 bytes is 45 MB, which at 10 bits per byte is 450 Mbit, or 4.5 s at 100 Mbps. There is no problem with sockets, blocking or otherwise; you've simply hit the limit of how much data you can feed it.
Non-blocking sockets are only useful if you want one thread to service multiple connections. Using non-blocking sockets and a poll/select loop means that your thread does not sit idle while waiting for new connections.
In your case this is not an issue since there is only one connection per thread so there is no issue if your thread is waiting for input.
Which leads to your original questions of why things slow down when you have more connections. Without further information, the most likely culprit is that you are network limited: ie your network cannot feed your server data fast enough.
If you are interested in non-blocking sockets on Windows do a search on MSDN for OVERLAPPED APIs
You could be running into other threading related issues, like deadlocks/race conditions/false sharing, which could be destroying the performance.
One thing to keep in mind is that although you have one thread per client, Windows will not automatically ensure they all run on different cores. If some of them are running on the same core, it is possible (although unlikely) to have your server in a sort of CPU-starved state, with some cores at 100% load and others idle. There are simply no guarantees as to how the OS spreads the load (in the default case).
To explicitly assign threads to particular cores, you can use SetThreadAffinityMask. It may or may not be worth having a bit of a play around with this to see if it helps.
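Something along these lines (illustrative; a real server would choose masks based on the topology the system reports):

```cpp
#include <windows.h>

// Pin the calling thread to a single core chosen by index.
void pin_current_thread_to_core(unsigned core_index) {
    DWORD_PTR mask = static_cast<DWORD_PTR>(1) << core_index;
    SetThreadAffinityMask(GetCurrentThread(), mask);
}
```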
On the other hand, this may not have anything to do with it at all. YMMV.