Strategy for asynchronous database access with Qt5 SQL - C++

I need to create a server in Qt C++ with QTcpServer that can handle many requests at the same time: over 1000 connections, all of which will constantly need to query a MariaDB database.
Before it can be deployed on the main servers, it needs to be able to handle 1000 connections, with each connection querying data as fast as it can, on a 4-core 1 GHz CPU with 2 GB RAM in an Ubuntu virtual machine running in the cloud. The MySQL (MariaDB) database is hosted on another, more powerful server.
So how can I implement this? After googling around, I've come up with the following options:
1. Create a new QThread for each SQL query
2. Use QThreadPool for SQL queries
With the first option, the server might create so many threads that the system slows down from all the context switching.
With the second, once the pool is full, other connections have to wait while MariaDB does its work. So what is the best strategy?

Sorry for my bad English.
1) Rule this one out.
2) Rule this one out too, at least in the form you describe.
3) Instead, let Qt's thread pool do the work. Yes, connections (strictly, the tasks created for them) have to wait for an available thread, but you can easily queue 10,000 tasks in a QThreadPool. If you want, you can configure the maximum number of threads in the pool, timeouts for tasks, and so on. Of course, you must synchronize any data shared between threads with a semaphore/futex/mutex and/or atomics.
MySQL (MariaDB) is itself a server, and that server can accept many connections at the same time. That is exactly the behaviour you want from your Qt application, and MySQL is just the backend holding the data for it.
So your application is the server. At its simplest, you listen on a socket for new connections, save the client connections in a vector/array, and work with each client connection from there. Whenever you need to do something (fetch data from the MySQL backend for a client, using a separate, lazily opened, per-client connection to MySQL; read/write data from/to a client; close a connection; etc.), you create a new task and add it to the thread pool.
This is a very simplified explanation, but I hope it helps.

Also consider adding this to the [mysqld] section of my.cnf:
thread_handling=pool-of-threads
Good luck.
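For context, a slightly fuller [mysqld] fragment with the related thread-pool settings; the extra values below are illustrative, not tuned for your hardware:

```ini
[mysqld]
thread_handling  = pool-of-threads
thread_pool_size = 4      # usually the number of CPU cores on the DB host
max_connections  = 1500   # headroom above the expected 1000 clients
```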

Related

Connecting to remote services from multiple threaded requests

I have a boost asio application with many threads, similar to a web server, handling hundreds of concurrent requests. Every request will need to make calls to both memcached and redis (via libmemcached and redispp respectively). Is the best practice in this situation to make a separate connection to both redis and memcached from each thread (effectively tripling the open sockets on the server, three per request)? Or is there a way for me to build a static object, with a single memcached/redis connection, and allow all threads to share that single connection? I'm a bit confused when it comes to the thread safety of something like this, and everything needs to be asynchronous between the threads, but blocking for each thread's individual request (so each thread has a linear progression, but many threads can be in different places in their own progression at any given time). Does that make sense?
Thanks so much!
Since memcached has a synchronous protocol, you should not write the next request before you have received the answer to the previous one. That means no other thread can talk on the same memcached connection. I'd prefer a thread-local connection if you work with it in "blocking" mode.
Alternatively, you can work in an "async" manner: make a pool of connections and pick one from it (locking it while in use). After the request is done, return it to the pool.
You can also make a request queue and process it in one or more dedicated threads (using multigets and callbacks).
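The thread-local option from the answer above can be sketched in plain C++. The `Connection` struct here is a stand-in for a real client handle (e.g. a `memcached_st*` from libmemcached):

```cpp
#include <string>

// Stand-in for a real client handle (e.g. memcached_st* from libmemcached).
struct Connection {
    bool open = false;
    void connect(const std::string& host) {
        (void)host;   // a real implementation would dial out here
        open = true;
    }
};

// One lazily created connection per thread: no locking is needed, because
// no two threads ever share the same handle.
Connection& threadLocalConnection(const std::string& host) {
    thread_local Connection conn;
    if (!conn.open)
        conn.connect(host);
    return conn;
}
```

Repeated calls on the same thread return the same handle, so each thread gets a linear, blocking request/response flow without interfering with the others.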

Best strategy to reduce DB connections in a multithreaded application using the Qt framework

I have a server that communicates with a lot of devices (>1000). Each connection has its own thread. Now I realize that I would have to configure MySQL to allow >1000 open concurrent connections, which seems like a very bad idea to me.
Qt docs say that every thread needs its own connection: http://qt-project.org/doc/qt-4.8/threads-modules.html#threads-and-the-sql-module
So, I have to call
QSqlDatabase::addDatabase("QMYSQL", "thread specific string");
in every thread.
What is the best practice here?
I would think some sort of resource pooling would be appropriate here.
Depending on the database workload from the >1000 device threads, a single database thread might be able to keep up; otherwise you will need several database threads.
Then set up a queuing system from the device threads to the database thread(s): the devices push work units onto the queue, and the database thread(s) pull them off and perform the queries.
I just realised I was assuming write-only access, some sort of logging; this idea may need modification if you are reading from the database and writing the results back to the devices.
Honestly speaking, I don't know much about Qt, but taking your problem in general, I would advise you to create a connection pool.
If you don't want to, or can't, implement a connection pool, it is also fine to increase max_connections in the MySQL configuration and leave the pooling to MySQL, which has its own connection-handling mechanism.

C++: Client-server distributed processing idea: close the connection after data is sent, reopen when the task is finished?

So, I'm trying to think of the best way to build a distributed-computing client-server architecture that supports as many clients as possible without putting too much load on the server.
*Note: I'm using the Boost library, though I haven't started any client/server code yet.
I think I want to open a TCP connection from a client to the server, saying "hey, I'll do some work for you"; the server sends a task and the data for that task over that connection, then closes it so that the server doesn't keep a ton of open socket threads around. When the client finishes processing, it reconnects to the server and sends back the completed data (closing the connection again if there are no further tasks to be done).
Is this a good idea? What is the best way to go about doing this?
It's possible that the server would need to manage up to 256 clients (worst case).
Have you looked at Amazon's MapReduce service? It is built for exactly this sort of thing.
http://aws.amazon.com/elasticmapreduce/
It is massively scalable, and will handle just about any job you can throw at it.
EDIT: If you want an open solution, I suggest looking at Apache Hadoop and its MapReduce offering. Also, you can check out OpenStack to host your own cloud infrastructure, if that seems advantageous for your application.
http://www.openstack.org/
As another option, I would suggest exploring BOINC, which is open source and designed precisely with this in mind. This assumes that the tasks do not need to communicate with other clients, only with the server.
It is the software used by SETI@home, Einstein@Home, Folding@home, and almost anything @home!

Why don't hundreds of DB connections matter with PHP, but they do in a C++ app?

In most (PHP) web apps there is a mysql_connect call followed by some DB actions, which means that if 1000 users are connected, 1000 connections are opened?
But in a C++ app this is incredibly slow. What is the main difference?
Thanks
PHP automatically closes the DB connection when the script terminates (unless you use persistent connections, or have closed the connection yourself before the script terminates, of course). In your C++ app this depends on how you actually handle connections, but I imagine you will want to keep your connections open for a longer stretch of time, and thus you could hit the maximum number of concurrent connections sooner.
You could also tweak some of the MySQL settings if you have performance issues.
But how are you accessing MySQL from your C++ app? Not using ODBC, are you?

C++ MySQL and multithreading - 1 DB connection per user?

In a multithreaded application, is it suitable to have one DB connection per connected client? It seems inefficient to me, but without connection pooling, how else can each client connection talk to the DB?
Thanks
If you decide to share a connection among threads, you need to be sure that one thread has completely finished with the connection before another uses it (use a mutex, semaphore, or critical section to protect the connections). Alternatively, you could write your own connection pool. This is not as hard as it sounds: make 10 connections (or however big your pool needs to be) on startup and allocate/deallocate them on demand, again protected by a mutex/critical section/semaphore.
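The hand-rolled pool described above can be sketched in plain C++ with a mutex and a condition variable. `DbConnection` is a placeholder for a real handle (e.g. `MYSQL*` from libmysqlclient), and a real pool would open the connections in the constructor:

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

// Stand-in for a real database handle (e.g. MYSQL* from libmysqlclient).
struct DbConnection { int id; };

// Fixed-size pool: create N connections up front, hand them out on demand,
// block callers when the pool is empty, and take handles back when done.
class ConnectionPool {
public:
    explicit ConnectionPool(int size) {
        for (int i = 0; i < size; ++i)
            free_.push_back(new DbConnection{i});  // real code connects here
    }
    ~ConnectionPool() {
        for (DbConnection* c : free_)  // assumes all handles were released
            delete c;
    }

    DbConnection* acquire() {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !free_.empty(); });  // block if empty
        DbConnection* c = free_.back();
        free_.pop_back();
        return c;
    }

    void release(DbConnection* c) {
        {
            std::lock_guard<std::mutex> lock(m_);
            free_.push_back(c);
        }
        cv_.notify_one();  // wake one thread waiting in acquire()
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::vector<DbConnection*> free_;
};
```

Usage is acquire/use/release; an RAII wrapper around acquire()/release() would make the release automatic and exception-safe.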
That depends on your architecture.
It sounds like you're using a server-to-distributed-clients model? In that case I would implement some sort of layer for DB access and hide the connection pooling, etc., behind a data-access facade.