How to start a MySQL transaction that will be committed by mysql_commit() - C++

I'm writing a C++ application that uses MySQL C API to connect to the database. MySQL server version is 5.6.19-log.
I need to run several SQL UPDATE, INSERT and DELETE statements in one transaction to make sure that either all changes or no changes are applied.
I found in the docs functions mysql_commit() and mysql_rollback() that finish the transaction (commit it or roll it back), but I can't find a corresponding function that starts a transaction.
Is there such a function? Am I missing something obvious?
I run UPDATE, INSERT and DELETE statements using mysql_real_query() function.
I guess I should be able to start the transaction by running a START TRANSACTION SQL statement using the same mysql_real_query() function, and then commit the transaction by running a COMMIT SQL statement the same way.
But then, what is the point of having dedicated mysql_commit() and mysql_rollback() functions in the API?

It looks like the MySQL C API indeed doesn't have a dedicated function equivalent to the START TRANSACTION SQL statement.
The MySQL C API has the mysql_commit() function, which does the same as the COMMIT SQL statement.
It also has the mysql_rollback() function, which does the same as the ROLLBACK SQL statement.
But there is no function for starting a transaction in this API.

//connect to the MySQL server:
MYSQL *mysql = mysql_init(NULL);
mysql = mysql_real_connect(mysql, ......);
//turn off autocommit: the next statement then implicitly opens a transaction
mysql_autocommit(mysql, false);
OR
//start a transaction directly as follows
//(mysql_query() takes a null-terminated string; mysql_real_query() would
//also need the statement length as a third argument)
mysql_query(mysql, "BEGIN");
//run your commands:
mysql_query(mysql, "UPDATE...");
//commit your transaction with the dedicated API call (same as running "COMMIT"):
mysql_commit(mysql);
//or undo it on error:
mysql_rollback(mysql);

Related

SQLite with in-memory and isolation

I want to create an in-memory SQLite DB. I would like to make two connections to this in-memory DB, one to make modifications and the other to read the DB. The modifier connection would open a transaction and continue to make modifications to the DB until a specific event occurs, at which point it would commit the transaction. The other connection would run SELECT queries reading the DB. I do not want the changes that are being made by the modifier connection to be visible to the reader connection until the modifier has committed (the specified event has occurred). I would like to isolate the reader's connection to the writer's connection.
I am writing my application in C++. I have tried opening two connections like the following:
int rc1 = sqlite3_open_v2("file:db1?mode=memory", pModifyDb, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
int rc2 = sqlite3_open_v2("file:db1?mode=memory", pReaderDb, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX | SQLITE_OPEN_URI, NULL);
I have created a table, added some rows and committed the transaction to the DB using 'pModifyDb'. When I try to retrieve the values using the second connection 'pReaderDb' by calling sqlite3_exec(), I receive a return code of 1 (SQLITE_ERROR).
I've tried specifying the URI as "file:db1?mode=memory&cache=shared". I am not sure whether the 'cache=shared' option preserves isolation. But that did not work either: when the reader connection tried to exec a SELECT query, the return code was 6 (SQLITE_LOCKED). Maybe because the shared cache option unified both connections under the hood?
If I remove the in-memory requirement from the URI, by using "file:db1" instead, everything works fine. I do not want to use file-based DB as I require high throughput and the size of the DB won't be very large (~10MB).
So I would like to know how to set up two isolated connections to a single SQLite in-memory DB?
Thanks in advance,
kris
This is not possible with an in-memory DB.
You have to use a database file.
To speed it up, put it on a RAM disk (if possible), and disable synchronous writes (PRAGMA synchronous=off) in every connection.
To allow a reader and a writer at the same time, you have to put the DB file into WAL mode.
This seems to be possible since version 3.7.13 (2012-06-11):
Enabling shared-cache for an in-memory database allows two or more database connections in the same process to have access to the same in-memory database. An in-memory database in shared cache is automatically deleted and memory is reclaimed when the last connection to that database closes.
Docs

mysql lost connection error

Currently, I am working on a project that integrates MySQL with an IOCP server to collect sensor data and verify the collected data from the client.
However, there is a situation where MySQL loses the connection.
The query itself is a simple query that inserts a single row of records or gets the average value between date intervals.
The data of each sensor flows into the DB every 5 seconds, all at the same time. When sensor messages arrive sporadically or overlap with the client's messages, the connection is dropped.
lost connection to mysql server during query
In relation to this message, I changed the max_allowed_packet value, as well as interactive_timeout, net_read_timeout, net_write_timeout, and wait_timeout.
It seems that an error occurs when queries overlap.
Please let me know if you know the solution.
I had a similar issue in a MySQL server with very simple queries where the number of concurrent queries were high. I had to disable the query cache to solve the issue. You could try disabling the query cache using following statements.
SET GLOBAL query_cache_size = 0;
SET GLOBAL query_cache_type = 0;
Please note that a server restart will enable the query cache again. Put the configuration in the MySQL configuration file if you need it preserved.
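If you do need it preserved, a my.cnf fragment along these lines keeps the cache disabled across restarts (option names per standard MySQL 5.x configuration; the file's location varies by install):

```ini
[mysqld]
# 0 disables the query cache entirely
query_cache_type = 0
query_cache_size = 0
```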
Can you run below command and check the current timeouts?
SHOW VARIABLES LIKE '%timeout';
You can change the timeout, if needed -
SET GLOBAL <timeout_variable>=<value>;

Avoiding a BEGIN TRANSACTION error on a read-only Sybase connection

I want to use python for a 32-bit odbc connection to a read-only db.
I've installed
C:\py_comp>c:\Python27\Scripts\pip.exe install pyodbc
Collecting pyodbc
Downloading pyodbc-4.0.19-cp27-cp27m-win32.whl (50kB)
100% |################################| 51kB 1.1MB/s
Installing collected packages: pyodbc
Successfully installed pyodbc-4.0.19
and opened a read only connection
conn = pyodbc.connect(r'uid=xxxx;pwd=xxxx;DRIVER={Adaptive Server Enterprise};port=1234;server=11.222.333.444;CHARSET=iso_1;db=my_db;readonly=True')
and all the above is working fine but when I try to do a select
cursor.execute("SELECT something FROM atable")
I get an error because it issues a BEGIN TRANSACTION, which it shouldn't do on a read-only connection.
pyodbc.Error: ('ZZZZZ', u"[ZZZZZ] [Sybase][ODBC Driver][Adaptive Server Enterprise]Attempt to BEGIN TRANSACTION in database 'my_db' failed because database is READ ONLY.\n (3906) (SQLExecDirectW)")
Python's DB API 2.0 specifies that, by default, connections should be opened with "autocommit" off so operations on the database are performed within a transaction that must be explicitly committed (or rolled back). This can result in a BEGIN TRANSACTION being sent to the database before each batch of Cursor#execute calls, and that can cause problems with operations that cannot be performed within a transaction (e.g., some DDL statements).
pyodbc allows us to enable "autocommit" on a connection, either by adding it as an argument to the connect call
cnxn = pyodbc.connect(conn_str, autocommit=True)
or by enabling it after the connection is made
cnxn = pyodbc.connect(conn_str)
cnxn.autocommit = True
That can avoid such problems by suppressing the automatic BEGIN TRANSACTION before executing an SQL command.

Alternative to KILL 'SID' on Azure SQL Data Warehouse

I am submitting a series of SQL statements (each followed by GO in sqlcmd) that I want to make a reasonable attempt to run on an Azure SQL Data Warehouse, and I've found how to make sqlcmd ignore errors. But I've seen that if I want to abort a statement in that sequence with:
KILL "SIDxxxxxxx";
The whole session ends:
Msg 111202, Level 16, State 1, Server adws_database, Line 1
111202;Query QIDyyyyyyyyyy has been cancelled.
Is there a way to not end a query session in Azure SQL Data Warehouse? Similar to how postgres's
pg_cancel_backend()
works?
In postgres, the
pg_terminate_backend(<pid>)
function seems to work similarly to the ADW
KILL 'SIDxxxx'
command.
Yes, a client can cancel a running request without aborting the whole session. In SSMS this is what the red square does during query execution.
Sqlcmd doesn't expose any way to cancel a running request, though. Other client interfaces do; with the .NET SqlClient, for example, you can use SqlCommand.Cancel().
David

New connection every time I execute a query?

I am using the MySQL C++ connector.
It shows how to make a simple connection like this:
sql::Driver* driver = get_driver_instance();
std::unique_ptr<sql::Connection> con(driver->connect(url, user, pass));
con->setSchema(database);
std::unique_ptr<sql::Statement> stmt(con->createStatement());
...
What I am wondering is: should I do this every time I want to execute something? Should I make a new connection object, execute my query, do what I want with the results, then dispose of the connection and repeat when I need to execute something else? What should the scope of a connection object be?
It is for a game server, the server will do login, logout, sessions, record stats, chat logs, etc.
At any given time, the game server only needs 1 connection as it runs on a single thread.
Thanks
Basically, you can get the driver instance and create just one connection when your application starts up, keep it in a variable (e.g., a singleton of your own design), and reuse that connection whenever you execute queries.
Caution: be careful that the DB connection does not time out; check it before reuse and reconnect if it has.