If I want my application to connect to potentially hundreds of different databases, what effect will this have on database connection pools?
I'm assuming Django has database connection pooling, and if I am connecting to 1000 different databases, that will result in a lot of memory used up in connection pools, no?
Django does not have database connection pooling; it leaves that to other tools built for the purpose (for example, PgBouncer or pgpool for PostgreSQL). So there is no need to worry about the number of databases you're connecting to in terms of keeping those connections open and using up a bunch of RAM.
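To illustrate why many configured databases need not mean many open connections, here is a minimal sketch of a lazy per-database connection cache (hypothetical names, using sqlite3 as a stand-in for PostgreSQL; this is not Django's actual implementation):

```python
import sqlite3

# Sketch: connections are created lazily, so configuring 1000 databases
# costs almost nothing until a given database is actually used.
class ConnectionCache:
    def __init__(self, configs):
        self.configs = configs      # alias -> database path/DSN
        self._connections = {}      # alias -> open connection

    def get(self, alias):
        # Open the connection only on first use, then reuse it.
        if alias not in self._connections:
            self._connections[alias] = sqlite3.connect(self.configs[alias])
        return self._connections[alias]

    def close_all(self):
        for conn in self._connections.values():
            conn.close()
        self._connections.clear()

cache = ConnectionCache({f"db{i}": ":memory:" for i in range(1000)})
conn = cache.get("db42")            # only this one connection is opened
print(len(cache._connections))      # 1, not 1000
```

With an external pooler like PgBouncer in front, even the connections you do open can be multiplexed onto a small number of real server connections.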
I need to create a server in Qt C++ with QTcpServer which can handle many requests at the same time (more than 1000 connections), and all of these connections will constantly need to use the database, which is MariaDB.
Before it can be deployed on the main servers, it needs to be able to handle 1000 connections, with each connection querying data as fast as it can, on a 4-core 1 GHz CPU with 2 GB RAM Ubuntu virtual machine running in the cloud. The MySQL database is hosted on another, more powerful server.
So how can I implement this? After googling around, I've come up with the following options:
1. Create a new QThread for each SQL query
2. Use a QThreadPool for SQL queries
For the first one, it might create so many threads that it slows the system down because of all the context switches.
For the second one, after the pool becomes full, other connections have to wait while MariaDB does its work. So what is the best strategy?
Sorry for my bad English.
1) Rule it out.
2) Rule it out as described.
3) Let Qt do the work here. Yes, connections (tasks for connections) have to wait for available threads, but you can easily add 10,000 tasks to Qt's thread pool. If you want, you can configure the maximum number of threads in the pool, timeouts for tasks, and more. Of course, you must synchronize data shared between threads with a semaphore/futex/mutex and/or atomics.
MySQL (MariaDB) is a server, and that server can accept many connections at the same time. This is exactly the behaviour you want for your Qt application; MySQL is just the data backend for your application.
So your application is a server too. Put simply: listen on a socket for new connections, save those client connections to a vector/array, and work with each client connection. Whenever you need to do something (get data from the MySQL backend for a client (yes, with a new, lazily created connection to MySQL, separate for each client), read/write data from/to a client, close a connection, etc.), you create a new task and add it to the thread pool.
This is a very simple explanation, but I hope it helps.
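The task-per-job pattern described above can be sketched in Python (a language-agnostic stand-in for QThreadPool; in Qt you would submit QRunnable subclasses or use QtConcurrent::run the same way):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: a bounded pool of worker threads. Queued tasks wait for a free
# worker instead of spawning one thread per query (option 1's problem).
def handle_query(client_id):
    # Placeholder for "run one SQL query for this client".
    return f"result for client {client_id}"

with ThreadPoolExecutor(max_workers=8) as pool:    # cap concurrency at 8
    # Submitting thousands of tasks is cheap; they queue up internally.
    futures = [pool.submit(handle_query, i) for i in range(1000)]
    results = [f.result() for f in futures]

print(len(results))   # 1000 tasks completed with only 8 worker threads
```

The key point is that the pool bounds the number of OS threads (and database connections) regardless of how many tasks are queued, which avoids the context-switch storm of thread-per-query.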
Also consider adding this to the [mysqld] section of my.cnf:
thread_handling=pool-of-threads
Good luck.
I have an application that interacts with an Access database using the DAO classes; recently I converted the database to a SQLite database.
I don't know which connection approach is better for the design:
Create only one database connection, held in a public variable, when the application opens; all queries use this single connection object at run time, and the connection is closed when the application closes.
Create a database connection every time before running a query, then close the connection immediately after loading the resultset into memory.
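Both strategies can be sketched with Python's sqlite3 module (hypothetical helper names; the same shapes apply to DAO/ADO wrappers):

```python
import sqlite3

DB_PATH = ":memory:"  # stands in for your converted .sqlite file

# Strategy 1: one connection for the application's lifetime.
app_conn = sqlite3.connect(DB_PATH)
app_conn.execute("CREATE TABLE items (name TEXT)")
app_conn.execute("INSERT INTO items VALUES ('widget')")

def query_with_shared_connection():
    # Reuses the long-lived connection opened at startup.
    return app_conn.execute("SELECT name FROM items").fetchall()

# Strategy 2: open, query, close every time.
def query_with_fresh_connection(path):
    conn = sqlite3.connect(path)
    try:
        return conn.execute("SELECT 1").fetchall()
    finally:
        conn.close()   # connection lives only as long as one query
```

For SQLite the open/close cost is small (it is just a local file), which is why the per-query style is viable here in a way it often is not for a networked database server.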
I recommend that you encapsulate your DB access, so that the decision on whether or not to keep a persistent connection open can be changed at a later point.
Since you are using SQLite, I am assuming it is a single-user DB, so concurrency, connection contention, locking, etc. are not likely to be issues.
Typically, the main reasons to use short-lived connections arise in multi-user web or service-oriented systems, where scalability and licensing considerations are important. This doesn't seem applicable in your case.
In short, there doesn't seem to be any reason not to keep a connection open for the entire duration of your app / the user's login session, based on the above assumptions.
If you use transactions, however, I would suggest that you commit after each successful atomic activity.
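That commit-per-atomic-activity advice can be sketched with sqlite3, whose connection object works as a context manager that commits on success and rolls back on error (a minimal illustration, not your DAO code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")

# Each 'with conn' block is one atomic activity: it commits when the
# block completes and rolls back if an exception escapes it.
with conn:
    conn.execute("INSERT INTO orders VALUES (1, 'placed')")

with conn:
    conn.execute("UPDATE orders SET status = 'shipped' WHERE id = 1")

print(conn.execute("SELECT status FROM orders").fetchone()[0])
```

Keeping transactions this short means the long-lived connection never holds locks between user actions.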
Both of your options have pros and cons. For your particular case, I think creating a database connection every time is not a bad idea, because creating a connection to SQLite is very fast and not time-consuming. This way you can also create/close more than one connection at once, which is a useful benefit; maybe you don't need it now, but you might in the future.
I have an application server. At a high level, this application server has users and groups. Users are part of one or more groups, and the server keeps all users aware of the state of their groups and other users in their groups. There are three major functions:
Updating and broadcasting meta-data relating to users and their groups; for example, a user logs in and the server updates this user's status and broadcasts it to all online users in this user's groups.
Acting as a proxy between two or more users; the client takes advantage of peer-to-peer transfer, but in the case that two users are unable to directly connect to each other, the server will act as a proxy between them.
Storing data for offline users; if a client needs to send some data to a user who isn't online, the server will store that data for a period of time and then send it when the user next comes online.
I'm trying to modify this application to allow it to be distributed across multiple servers, not necessarily all on the same local network. However, I have a requirement that backwards compatibility with old clients cannot be broken; essentially, the distribution needs to be transparent to the client.
The biggest problem I'm having is handling the case of a user connected to Server A making an update that needs to be broadcast to a user on Server B.
By extension, an even bigger problem is when a user on Server A needs the server to act as a proxy between them and a user on Server B.
My initial idea was to try to assign each user a preferred server, using some algorithm that takes which users they need to communicate with into account. This could reduce the number of users who may need to communicate with users on other servers.
However, this only minimizes how often users on different servers will need to communicate. I still have the problem of achieving the communication between users on different servers.
The only solution I could come up with for this is having the servers connect to each other when they need to deal with a user connected to a different server.
For example, if I'm connected to Server A and I need a proxy with another user connected to Server B, I would ask Server A for a proxy connection to this user. Server A would see that the other user is connected to Server B, so it would make a 'relay' connection to Server B. This connection would just forward my requests to Server B and the responses to me.
The problem with this is that it would increase bandwidth usage, which is already extremely high. Unfortunately, I don't see any other solution.
Are there any well known or better solutions to this problem? It doesn't seem like it's very common for a distributed system to have the requirement of communication between users on different servers.
I don't know how much flexibility you have in modifying the existing server. The way I did this a long time ago was to have all the servers keep a TCP connection open to each other. I used a UDP broadcast which told the other servers about each other and allowed them to connect to new servers and remove servers that stopped sending the broadcast.
Then, every time a user connects to a server, that server unicasts a TCP message to all the servers it is connected to, and all the servers keep a list of users and which server they are on.
Then, as you suggest, if you get a message from one user to another user on another server, you have to relay it to that server. The servers really need to be on the same LAN for this to work well.
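A minimal sketch of the per-server routing table this implies (hypothetical names, Python for brevity; the stub callbacks stand in for real socket sends):

```python
# Each server keeps a map of which server hosts which user, updated
# from the "user connected" messages unicast between servers.
user_locations = {}   # user_id -> server_id

def on_user_connected(user_id, server_id):
    user_locations[user_id] = server_id

def route_message(this_server, recipient, payload, send_local, relay):
    """Deliver locally if the recipient is on this server,
    otherwise relay to the server that hosts them."""
    target = user_locations.get(recipient)
    if target is None:
        return "store-for-offline"          # recipient not online anywhere
    if target == this_server:
        send_local(recipient, payload)      # same server: direct delivery
        return "local"
    relay(target, recipient, payload)       # different server: forward
    return "relayed"
```

The race conditions mentioned below live exactly in this table: a user can disconnect after the lookup but before the relayed message arrives, so the receiving server must tolerate stale entries.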
You can run the server to server communications in a thread, and actually simulate the user being on the same server.
However, maintaining the user lists and sending messages is prone to race conditions (like a user dropping off while you are relaying a message from one server to another, etc.).
Maintaining the server code was a nightmare and this is really not the most efficient way to implement scalable servers. But if you have to use the legacy server code base then you really do not have too many options.
If you can, look into using a language that supports remote processes and nodes, like Erlang.
An alternative might be to use a message queue system like RabbitMQ or ActiveMQ and have the servers talk to each other through it. Those systems are designed to be scalable and usually work on a publish/subscribe mechanism.
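The publish/subscribe idea can be sketched in-process (a Python stand-in for a RabbitMQ/ActiveMQ topic; a real broker adds network transport, persistence, and delivery guarantees):

```python
from collections import defaultdict

# Minimal in-memory pub/sub: each server subscribes to the topics
# (e.g. groups) it hosts users for, and publishes updates without
# knowing which other servers will receive them.
subscribers = defaultdict(list)   # topic -> list of callbacks

def subscribe(topic, callback):
    subscribers[topic].append(callback)

def publish(topic, message):
    for callback in subscribers[topic]:
        callback(message)

received_a, received_b = [], []
subscribe("group:42", received_a.append)   # Server A has users in group 42
subscribe("group:42", received_b.append)   # Server B does too
publish("group:42", "alice went online")

print(received_a, received_b)   # both servers got the broadcast
```

This decouples the servers from each other: the publisher never needs the user-to-server map, which sidesteps many of the race conditions in hand-rolled server meshes.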
In most web (PHP) apps, there is mysql_connect and some DB actions, which means that if 1000 users are connected, 1000 connections are opened?
But with a C++ app it is incredibly slow... what is the main difference?
Thanks
PHP will automatically close the DB connections when the script terminates (unless you use persistent connections or have closed the connection yourself before the script terminates of course). In your C++ app, this will depend on how you actually handle connections. But I can imagine you will want to keep your connections open for a longer stretch of time in the C++ app, and thus you could hit the maximum number of concurrent users sooner.
You could also tweak some of the MySQL settings if you have performance issues.
But how are you accessing MySQL from your C++ app? Not using ODBC are you?
I'm building an application which uses MySQL; I was wondering what would be the best way to manage the connection to the actual MySQL server?
I'm still in the design phase, but currently I have it connecting (or aborting on error) before every query and disconnecting after. This is just for testing; right now I'm only running one query to see if the code I've set up so far works.
My app might perform a few queries every 5/10/20/30 minutes, depending on settings, and doesn't really need to do anything with SQL until then.
So I'm wondering if it's more beneficial to use a continuous connection that exists for the lifetime of the application (if possible), or to simply connect to SQL before I intend to use it, do what the app needs to do, and then disconnect?
Connecting once and performing many queries will naturally be more efficient.
However, if performance isn't a major concern for your project, maybe aiming for simplicity in your code might be a better option (especially if you are the only connection to the database).
If you want to get clever, then maybe connect as and when you need to, and keep the connection alive until you stop making queries. E.g., drop the connection if there have been no queries for 30 seconds or something like that.
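That "keep it alive until idle" idea can be sketched like this (a hypothetical wrapper, with sqlite3 standing in for the MySQL client library):

```python
import sqlite3
import time

class IdleTimeoutConnection:
    """Opens the connection on demand and drops it after `idle_seconds`
    without a query, so bursts of queries share one connection."""
    def __init__(self, path, idle_seconds=30.0):
        self.path = path
        self.idle_seconds = idle_seconds
        self._conn = None
        self._last_used = 0.0

    def execute(self, sql):
        now = time.monotonic()
        # Close a connection that sat idle past the timeout.
        if self._conn is not None and now - self._last_used > self.idle_seconds:
            self._conn.close()
            self._conn = None
        if self._conn is None:
            self._conn = sqlite3.connect(self.path)
        self._last_used = now
        return self._conn.execute(sql).fetchall()

db = IdleTimeoutConnection(":memory:", idle_seconds=30.0)
rows = db.execute("SELECT 1")
print(rows)   # [(1,)]
```

This checks idleness lazily on the next query; a background timer that proactively closes the connection would also work but adds threading complexity. With MySQL you would also want to handle the server-side wait_timeout by reconnecting on a dropped connection.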
How many instances of this app will be connecting to MySQL? If it's just one, keeping a MySQL connection open for convenience shouldn't cause any problems, but remember there's a (configurable) limit to the number of MySQL connections you can have open to the server. In this case, I would recommend opening a connection, running whatever queries you need to run, and then closing it. Connecting per query adds more overhead as you add queries to your application.