I have two servers running on one machine (one for database connections and one or more for client connections). The job of the database connection server is to fetch data from a MySQL database and hand it to the other servers on request.
Right now the data transfer between the two servers happens via JSON (json_spirit); I don't know why I designed it this way.
I am reaching a stage where the data loaded from the MySQL DB is huge at server startup and again every minute, with thousands of smaller queries in between.
I can see the impact JSON is having, since I have to convert each MYSQL_RES to JSON, transmit it from the DB connection server to the client connection server, and then parse the JSON back into a data set.
I am looking to serialize my data, or do something other than text parsing, since the overhead is slowing down the client connection server while it waits on the response from the DB connection server.
What would you suggest for serializing the MYSQL_RES struct?
I have read about Protobuf, FlatBuffers, and serialization in general, but I simply cannot make a decision.
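If the goal is just to get away from text parsing, one option to weigh against Protobuf and FlatBuffers is a plain length-prefixed binary framing of the result set. A rough sketch, assuming the standard MySQL C API and a result obtained via mysql_store_result() (column metadata and error handling omitted):

#include <mysql/mysql.h>
#include <cstdint>
#include <string>

// Serialize a MYSQL_RES into a flat buffer:
// [uint32 num_fields][uint32 num_rows], then per cell [uint32 len][bytes],
// with len == UINT32_MAX marking SQL NULL.
// Assumes the result came from mysql_store_result(), so mysql_num_rows() is valid.
std::string serialize_result(MYSQL_RES *res) {
    std::string buf;
    auto put_u32 = [&buf](uint32_t v) {
        buf.append(reinterpret_cast<const char *>(&v), sizeof v);
    };
    uint32_t nfields = mysql_num_fields(res);
    uint32_t nrows   = static_cast<uint32_t>(mysql_num_rows(res));
    put_u32(nfields);
    put_u32(nrows);
    while (MYSQL_ROW row = mysql_fetch_row(res)) {
        unsigned long *lens = mysql_fetch_lengths(res);  // byte length of each cell
        for (uint32_t i = 0; i < nfields; ++i) {
            if (row[i] == nullptr) {                     // SQL NULL
                put_u32(UINT32_MAX);
            } else {
                put_u32(static_cast<uint32_t>(lens[i]));
                buf.append(row[i], lens[i]);
            }
        }
    }
    return buf;
}

The receiver reads the two counts and walks the cells in the same order; nothing is parsed as text. Protobuf or FlatBuffers would mainly buy you schema evolution and cross-language support on top of this.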
I have two databases: one on the client side and one running on another machine (which I use as a remote database). I have to send table data from the client database to my remote database. The problem is that if I use this code:
mysql_query(con, "SELECT * FROM DB.Table1");
this call only works against the client-side database and does not connect to my remote database. It seems only one connection is possible at a time. I am doing this in C++; do you have any solution?
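The MySQL C API allows as many simultaneous connections as you like; each mysql_real_connect() call gives you an independent handle, and mysql_query() runs against whichever handle you pass it. A minimal sketch (hostnames and credentials are placeholders):

#include <mysql/mysql.h>
#include <cstdio>

int main() {
    MYSQL *local  = mysql_init(nullptr);
    MYSQL *remote = mysql_init(nullptr);

    // One handle per server: the client-side database and the remote one.
    if (!mysql_real_connect(local, "localhost", "user", "pass", "DB", 3306, nullptr, 0)) {
        fprintf(stderr, "local connect failed: %s\n", mysql_error(local));
        return 1;
    }
    if (!mysql_real_connect(remote, "remote.example.com", "user", "pass", "DB", 3306, nullptr, 0)) {
        fprintf(stderr, "remote connect failed: %s\n", mysql_error(remote));
        return 1;
    }

    // Read from the local database ...
    mysql_query(local, "SELECT * FROM DB.Table1");
    MYSQL_RES *res = mysql_store_result(local);

    // ... then fetch each row here and INSERT it via the `remote` handle.

    mysql_free_result(res);
    mysql_close(local);
    mysql_close(remote);
    return 0;
}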
I am embedding MonetDBe into a multi-threaded C++ application.
I have several threads running on the server side of my application, and each thread opens its own instance of the same MonetDB database, i.e. each thread runs the following code:
monetdbe_database db = NULL;
if (monetdbe_open(&db, url /* inmemory database */, NULL /* no options */)) {
fprintf(stderr, "Failed to open database\n");
return -1;
}
Each thread runs MonetDB queries sent by clients connected to the server; therefore, several clients can be connected to the DB at the same time, and they may send requests to access or update the same underlying tables simultaneously.
I just want to make sure that MonetDB has been designed to deal with this scenario.
I understand that MonetDB is not designed to be a high-transaction DB, and my use case is more analytical, but I do have several clients connected to the server, and sometimes they may run queries against the same DB tables at the same time. Is this the correct way to run multi-threaded applications with MonetDBe?
According to this issue on GitHub (https://github.com/MonetDBSolutions/monetdbe-examples/issues/11) and this example (https://github.com/MonetDBSolutions/monetdbe-examples/blob/master/C/concurrent.c), so long as you aren't using the same monetdbe_database handle for each connection, you should be good to go.
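Following concurrent.c, the pattern would be one monetdbe_database handle per thread, all opened on the same url. A rough sketch along those lines (the database path and query are placeholders, and error handling is trimmed):

#include <monetdbe.h>
#include <thread>
#include <vector>
#include <cstdio>

void worker(char *url) {
    monetdbe_database db = NULL;
    if (monetdbe_open(&db, url, NULL)) {  // each thread gets its own handle
        fprintf(stderr, "Failed to open database\n");
        return;
    }
    monetdbe_result *result = NULL;
    monetdbe_cnt affected = 0;
    char *err = monetdbe_query(db, (char *)"SELECT count(*) FROM sys.tables;",
                               &result, &affected);
    if (err) fprintf(stderr, "query failed: %s\n", err);
    if (result) monetdbe_cleanup_result(db, result);
    monetdbe_close(db);
}

int main() {
    char url[] = "/path/to/dbdir";  // shared on-disk database, as in concurrent.c
    std::vector<std::thread> threads;
    for (int i = 0; i < 4; ++i) threads.emplace_back(worker, url);
    for (auto &t : threads) t.join();
    return 0;
}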
I am using QuestDB and posting records like this:
influxDB = InfluxDBFactory.connect("http://localhost:9000", username, password);
influxDB.setDatabase(database);
influxDB.enableBatch(BatchOptions.DEFAULTS);
influxDB.write(Point.measurement(TABLE_NAME)
        .addField("ID", ir.getid())
        .addField(....)
        .build());
We do not send a timestamp, so QuestDB inserts the server time; we run the server with the one-worker-thread option.
Each ID is unique, but when I query QuestDB I sometimes see five records with the same ID, as if QuestDB were creating duplicates.
What could be wrong here?
The duplicates all have the same values, except that the timestamps are roughly 10 seconds apart from one another.
QuestDB does not support the Influx HTTP protocol; it supports TCP instead.
What probably happens is that you open an HTTP connection to the port, which opens an underlying TCP socket and sends the HTTP headers; QuestDB ignores the headers as invalid messages and parses the valid Influx protocol lines. The Influx library then never receives a response, so it re-sends the message after some configured interval, and again, and again... and that is where your duplicates come from.
Switch to a TCP connection and drop the Influx library; send the messages using something like https://questdb.io/docs/develop/insert-data/ or Telegraf, or use UDP.
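For illustration, the line protocol is just newline-delimited text on a socket, so no client library is strictly needed. A minimal C++ sketch, assuming QuestDB's default ILP TCP port 9009 and a made-up table/column layout; the `i` suffix marks an integer field, and omitting the trailing timestamp makes the server assign one:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(9009);                  // QuestDB ILP over TCP
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (connect(fd, (sockaddr *)&addr, sizeof addr) != 0) {
        perror("connect");
        return 1;
    }
    // One line per record; no timestamp field, so the server assigns one.
    std::string line = "my_table ID=42i,value=3.14\n";
    send(fd, line.data(), line.size(), 0);
    close(fd);
    return 0;
}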
I'm implementing a kind of distributed database. On the nodes, "Agent" programs are running, which receive queries from a "Router" and pass them to the local database. Afterwards, the results should be sent back to the Router.
How can I send a MYSQL_RES structure over the network? What is the best way to do this? At the moment I just build a protobuf object with the data, but deserializing that object is quite slow, and building it requires a pass over all rows.
Is it possible to send the MySQL binary result directly to the Router and interpret it there? I still need the Agent, as the result sets must be examined locally, too. I'm using C/C++.
Regards
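One way to avoid both the protobuf build and the extra pass over all rows is to stream rows out as the server produces them. A rough sketch, assuming the MySQL C API and a hypothetical send_all() helper that writes bytes to the Router's socket; the Agent can still inspect each row as it forwards it:

#include <mysql/mysql.h>
#include <cstdint>

// Hypothetical helper: writes exactly `len` bytes to the router socket.
void send_all(int sock, const void *data, uint32_t len);

// Stream a result set row by row: mysql_use_result() hands rows over as the
// server produces them, so the Agent can forward each row without
// materializing the whole result (or a protobuf object) first.
void stream_result(MYSQL *conn, int router_sock) {
    MYSQL_RES *res = mysql_use_result(conn);
    uint32_t nfields = mysql_num_fields(res);
    send_all(router_sock, &nfields, sizeof nfields);
    while (MYSQL_ROW row = mysql_fetch_row(res)) {
        unsigned long *lens = mysql_fetch_lengths(res);
        for (uint32_t i = 0; i < nfields; ++i) {
            uint32_t len = (row[i] == nullptr) ? UINT32_MAX  // SQL NULL marker
                                               : (uint32_t)lens[i];
            send_all(router_sock, &len, sizeof len);
            if (row[i]) send_all(router_sock, row[i], len);
        }
    }
    mysql_free_result(res);
}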
If I want my application to connect to potentially hundreds of different databases, what effect will this have on database pools?
I'm assuming Django has database connection pooling, and if I am connecting to 1000 different databases, that will result in a lot of memory used up in connection pools, no?
Django does not have database connection pooling; it leaves this to other tools built for that purpose (for example, PgBouncer or pgpool for PostgreSQL). So there is no worry about the number of databases you're connecting to in terms of keeping those connections open and using up a bunch of RAM.