I am working on a module that uses 10 queues to handle threads, and each of them sends curl requests using the curl_easy interface (along with a lock) so that a single connection is maintained until the response is received. I want to enhance request handling by using the curl_multi interface, where curl requests are sent by the thread and handled in parallel.
I have created separate code to implement it. For instance, I created 3 threads, handled one by one: the first thread sends requests to curl_multi while it is running and transfers exist, allocating resources via the curl_easy interface for each transfer.
I have gone through a lot of examples but cannot figure out how to implement it in C++. Also, because I have only recently learnt multithreading and curl concepts in C++, I need assistance with the approach.
I expect a single thread to be able to send curl requests until the user stops sending.
Update - I have added two threads, and each sends two requests simultaneously. curl_multi is driven through an array of curl_easy handles.
I want to get rid of the array because it limits the number of requests.
Can it be made asynchronous, accepting all transfers and exiting only when the client/user does? There are plenty of curl_multi examples, yet I am still not clear on its implementation.
Reading the curl_multi documentation, it doesn't seem as though you have to create separate threads for this, as it works via your multiple easy handles added to the multi handle object. You then call curl_multi_perform to start all transfers in a non-blocking way.
I expect a single thread to be able to send curl requests until the user stops sending.
I don't understand what you mean by this. Do you mean that you just want to keep those connections alive until everything is transferred? If so, curl_multi already gives you info on the progress of your transfers, which can help you determine what to do.
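For reference, here is a minimal single-threaded sketch of that pattern; the URLs are placeholders and error handling is trimmed for brevity. One easy handle is created per transfer and handed to the multi handle, so no fixed-size array is needed:

```cpp
#include <curl/curl.h>
#include <string>
#include <vector>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURLM* multi = curl_multi_init();

    // One easy handle per transfer; the multi handle owns the whole set,
    // so there is no array limiting the number of requests.
    std::vector<std::string> urls = {"https://example.com/a",
                                     "https://example.com/b"};
    for (const auto& url : urls) {
        CURL* easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, url.c_str());
        curl_multi_add_handle(multi, easy);
    }

    int still_running = 0;
    do {
        curl_multi_perform(multi, &still_running);          // drive all transfers
        curl_multi_wait(multi, nullptr, 0, 1000, nullptr);  // sleep until activity
    } while (still_running > 0);

    // Reap finished transfers and release their easy handles.
    CURLMsg* msg;
    int msgs_left;
    while ((msg = curl_multi_info_read(multi, &msgs_left))) {
        if (msg->msg == CURLMSG_DONE) {
            curl_multi_remove_handle(multi, msg->easy_handle);
            curl_easy_cleanup(msg->easy_handle);
        }
    }
    curl_multi_cleanup(multi);
    curl_global_cleanup();
}
```

New easy handles can be added with curl_multi_add_handle at any time, even while the loop is running, which is how a single thread can keep accepting transfers until the user stops.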
Hope it helps
Related
I need to implement a server to which multiple clients can send requests simultaneously. The code processing an individual request might block (with the thread going to sleep) in the middle.
At the moment I am using the C++ gRPC synchronous server. Each time a client sends a request, a new thread is spawned on the server's side. This is a problem since the server can create too many threads simultaneously.
I am considering two solutions to avoid the problem:
1) Use the sync server with a ResourceQuota (e.g. restrict the max number of threads to 10); a sketch of this follows below.
2) Use the async server.
Implementing the second solution is considerably more difficult than implementing the first. What advantage (if any) would the second solution give compared to the first? Which solution would give better results in terms of:
The amount of time an individual client needs to wait for a response to an RPC.
The resources (memory, threads) used on the server.
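For what it's worth, option (1) is only a few lines with the gRPC C++ API. A hedged sketch, where MyServiceImpl and the port are placeholders:

```cpp
#include <grpcpp/grpcpp.h>
#include <grpcpp/resource_quota.h>
#include <memory>

void RunServer() {
    MyServiceImpl service;  // placeholder: your generated sync service impl

    grpc::ResourceQuota quota("server_quota");
    quota.SetMaxThreads(10);  // cap the threads the sync server may use

    grpc::ServerBuilder builder;
    builder.SetResourceQuota(quota);
    builder.AddListeningPort("0.0.0.0:50051", grpc::InsecureServerCredentials());
    builder.RegisterService(&service);
    std::unique_ptr<grpc::Server> server = builder.BuildAndStart();
    server->Wait();  // serve until shutdown
}
```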
I have a boost asio application with many threads, similar to a web server, handling hundreds of concurrent requests. Every request needs to make calls to both memcached and redis (via libmemcached and redispp respectively). Is the best practice in this situation to make a separate connection to both redis and memcached from each thread (effectively tripling the open sockets on the server, three per request)? Or is there a way for me to build a static object with a single memcached/redis connection and let all threads share that single connection? I'm a bit confused when it comes to the thread safety of something like this: everything needs to be asynchronous between the threads, but blocking for each thread's individual request (so each thread has a linear progression, but many threads can be in different places in their own progression at any given time). Does that make sense?
Thanks so much!
Since memcached has a synchronous protocol, you should not write the next request before you get the answer to the previous one. So no other thread can talk on the same memcached connection. I'd prefer a thread-local connection if you work with it in "blocking" mode.
Or you can make it work in an "async" manner: make a pool of connections, pick a connection from it (and lock it), and after the request is done, return it to the pool.
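A minimal sketch of such a pool, assuming a hypothetical Connection type (e.g. a wrapped memcached handle); acquire() blocks until a connection is free:

```cpp
#include <condition_variable>
#include <deque>
#include <memory>
#include <mutex>

template <typename Connection>
class ConnectionPool {
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<std::unique_ptr<Connection>> free_;
public:
    void add(std::unique_ptr<Connection> c) {
        std::lock_guard<std::mutex> lk(m_);
        free_.push_back(std::move(c));
        cv_.notify_one();
    }
    std::unique_ptr<Connection> acquire() {  // blocks until one is free
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !free_.empty(); });
        auto c = std::move(free_.front());
        free_.pop_front();
        return c;
    }
    void release(std::unique_ptr<Connection> c) { add(std::move(c)); }
};
```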
Also, you can make a request queue and process it in a special thread (or threads), using multigets and callbacks.
I have a server interacting with multiple clients, where the clients send messages to the server and the server reads them via recv(). The problem I am getting is that I'm using WaitForSingleObject(handle, 10000 ms) to make the server wait a few seconds to interact with one client and then let others access it, but then I start seeing the server answer clients with the wrong message and get blocked. So it looks like a synchronization issue.
So my question is (since I'm a beginner in C++): how can I ensure that every incoming message is received and replied to the right client, allowing all the clients to interact with the server?
There are two alternatives.
The first is a pretty standard model: one thread per client. When a client connects, you start a thread to handle it.
The second approach doesn't require many threads. You use WSARecv() on an overlapped socket instead of recv(). This way you can have multiple receive operations outstanding simultaneously, one per client, and wait on them all with WaitForMultipleObjects(). To be specific, you wait on the event inside each WSAOVERLAPPED. Remember that WaitForMultipleObjects() has a limit on the number of wait objects; when it is exceeded, you will need to run another thread. The return code from WaitForMultipleObjects() tells you which client has sent data, so you can reply to it.
Or, as suggested above, you could use select() to figure out which socket has data.
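For illustration, a minimal select()-based loop (POSIX calls shown; Winsock's select() is analogous, and listen_fd is assumed to be a bound, listening socket). Because the reply goes out on the same socket the data arrived on, each client gets its own answer:

```cpp
#include <algorithm>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <vector>

void serve(int listen_fd) {
    std::vector<int> clients;
    for (;;) {
        // Rebuild the read set each iteration: the listener plus all clients.
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(listen_fd, &readfds);
        int maxfd = listen_fd;
        for (int fd : clients) {
            FD_SET(fd, &readfds);
            maxfd = std::max(maxfd, fd);
        }

        select(maxfd + 1, &readfds, nullptr, nullptr, nullptr); // block until activity

        if (FD_ISSET(listen_fd, &readfds))  // new client connecting
            clients.push_back(accept(listen_fd, nullptr, nullptr));

        for (int& fd : clients) {
            if (!FD_ISSET(fd, &readfds)) continue;  // nothing from this client
            char buf[512];
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n <= 0) { close(fd); fd = -1; continue; }  // client gone
            send(fd, buf, n, 0);  // reply on the same socket => right client
        }
        // Drop disconnected clients.
        clients.erase(std::remove(clients.begin(), clients.end(), -1),
                      clients.end());
    }
}
```

Note that select() is capped at FD_SETSIZE descriptors, so for very large client counts you would need a different mechanism.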
I would like to know a TCP-based server architecture that supports a large number of clients (at least 10K) for implementing a FIX server. My points are:
How do we design it?
How do we listen on the open port? Using select, poll, or some other function?
How do we process client responses? At large scale we cannot create one thread per client.
Should the processing of responses live in a different executable, sharing requests and responses with the server executable through IPC?
There is much more to it. I would appreciate it if anyone could explain it or provide any links.
Thanks
An excellent resource for information on this topic is The C10K problem. Although the numbers there seem a little dated, the techniques are still applicable today.
The architecture depends on what you want to do with the clients' incoming data. My guess is that for every incoming message you would perform some computation and probably also return a response.
In that case I would create one main listener thread that receives all the incoming messages. (Actually, if your hardware has more than one physical network device, I would use a listener thread per device and make sure each one listens to a specific device.)
Get the number of CPUs on your machine, create worker threads accordingly, and bind each thread to one CPU (maybe the number of worker threads should be num_of_cpu - 1, to leave a CPU available for the listener and dispatcher).
Each thread has a queue and a semaphore; the main listener thread just pushes the incoming data onto those queues. There are many ways to perform load balancing (more about that later).
Each worker thread just works on the requests given to it and puts the response on another queue, which is read by the dispatcher.
The dispatcher - there are two options here: use a dedicated thread for the dispatcher (or a thread per network device, as for the listeners), or have the dispatcher actually be the same thread as the listener.
There is some advantage to putting them both on the same thread, since it makes it easier to detect lost socket connections and to use the same fds for both reading and writing without thread synchronization. However, using two different threads might give better performance; it needs to be tested.
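As a rough illustration of the listener-to-worker handoff described above, here is a minimal queue using a condition variable in place of a semaphore (the Request/Response types are placeholders):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

template <typename T>
class WorkQueue {
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<T> q_;
public:
    void push(T item) {  // called by the listener thread
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(item)); }
        cv_.notify_one();  // wake the worker (plays the semaphore's role)
    }
    T pop() {  // worker blocks here until work arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T item = std::move(q_.front());
        q_.pop();
        return item;
    }
};

// One worker thread's loop: drain its own request queue, hand each
// result to the dispatcher's queue.
// while (true) { Response r = process(requests.pop()); responses.push(r); }
```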
Note about load balancing:
This is a topic of its own.
The simplest thing is to use one queue for all worker threads, but then they have to lock in order to pop items, and the locking can hurt performance (though you get the most balanced load).
Another quite simple approach is to give every worker a private queue and perform round-robin when inserting. Every X cycles, check the sizes of all the queues; if some queues are much larger than others, leave them out for the next X cycles and then recheck. This is not the best approach, but it is simple to implement and gives some load balancing while requiring no locking.
By the way - there is a way to implement a queue between 2 threads without blocking (a sketch follows), but that is also another topic.
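For the curious, here is a minimal sketch of such a non-blocking queue: a bounded single-producer/single-consumer ring buffer using atomics. It assumes exactly one pushing thread and one popping thread:

```cpp
#include <array>
#include <atomic>
#include <cstddef>

// Holds at most N-1 items; push/pop never block, they just report failure.
template <typename T, std::size_t N>
class SpscQueue {
    std::array<T, N> buf_;
    std::atomic<std::size_t> head_{0}, tail_{0};
public:
    bool push(const T& v) {  // producer thread only
        std::size_t t = tail_.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head_.load(std::memory_order_acquire)) return false; // full
        buf_[t] = v;
        tail_.store(next, std::memory_order_release);
        return true;
    }
    bool pop(T& v) {  // consumer thread only
        std::size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false; // empty
        v = buf_[h];
        head_.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};
```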
I hope it helps,
Guy
If the client and server are on a secure network, the security requirements are minimal - at most encrypting the transfers. If the clients and the server are not on a secure network, you first want the server and client to authenticate each other and then initiate encrypted data transfer. For data transfer, server-side authentication should suffice. At the end of this authentication, use the session key to generate an encrypted (symmetric) data stream. Consider using TFTP; it is simple to implement and scales reasonably well.
I already wrote here about the HTTP chat server I want to create: Alternative http port?
This HTTP server should stream text to every user in the same chat room on the website. The browser will stay connected and wait for further HTML code. (Yes, that works; the browser won't reject the connection.)
I have a new question: because this chat server doesn't need to receive information from the client, it's not necessary to listen to the client after the server has sent its first response. New chat messages will be sent to the server on a new connection.
So I can open 2 threads: one waiting for new clients (or new messages) and one for the HTML streaming.
Is this a good idea, or should I use one thread per client? I don't think it's good to have one thread per client when there are many chat users online, since the server should handle multiple different chats with their own rooms.
3 possibilities:
1. One thread for all clients, sending text to each client in succession - there shouldn't be much lag since it's only text.
This will look like: user1.send("text"); user2.send("text"); ...
2. One thread per chat or chatroom
3. One thread per chat user - ... many...
Thank you, I haven't done much with sockets yet ;).
Right now, you seem to be thinking in terms of a given thread always carrying out a given (type of) task. While that basic design can make sense, to produce a scalable server like this, it generally doesn't work very well.
Often a slightly more abstract viewpoint works out better: you have tasks that need to get done, and threads that do those tasks -- but a thread doesn't really "care" about what task it executes.
With this viewpoint, you simply need to create some sort of data structure that describes each task that needs to be done. When you have a task you want done, you fill in a data structure to describe the task, and hand it off to get done. Somewhere, there are some threads that do the tasks.
In this case, the exact number of threads becomes mostly irrelevant -- it's something you can (and do) adjust to fit the number of CPU cores available, the type of tasks, and so on, not something that affects the basic design of the program.
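To make that concrete, here is a minimal sketch of such a task pool: threads pull std::function tasks from a shared queue and don't care what each task does. The task bodies (send a message to a room, and so on) are up to you:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
    std::vector<std::thread> workers_;
    std::queue<std::function<void()>> tasks_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
public:
    explicit ThreadPool(unsigned n = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < n; ++i)
            workers_.emplace_back([this] {
                for (;;) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lk(m_);
                        cv_.wait(lk, [this] { return stop_ || !tasks_.empty(); });
                        if (stop_ && tasks_.empty()) return;
                        task = std::move(tasks_.front());
                        tasks_.pop();
                    }
                    task();  // run the task; the thread doesn't care what it is
                }
            });
    }
    void submit(std::function<void()> task) {
        { std::lock_guard<std::mutex> lk(m_); tasks_.push(std::move(task)); }
        cv_.notify_one();
    }
    ~ThreadPool() {  // drain remaining tasks, then join all workers
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        for (auto& w : workers_) w.join();
    }
};
```

The pool size can then be tuned to the core count and workload without touching the rest of the design.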
I think the easiest pattern for this simple app is to have a pool of threads and then, for each client, pick an available thread or wait until one becomes available.
If you want a serious understanding of HTTP server architecture concepts, Google the following:
apache architecture
nginx architecture