I have been searching for a while on how to get the generated auto-increment ID from an "INSERT INTO ... (...) VALUES (...)". Even on Stack Overflow, I only find the answer of using a "SELECT LAST_INSERT_ID()" in a subsequent query. I find this solution unsatisfactory for a number of reasons:
1) This will effectively double the number of queries sent to the database, especially since it is mostly handling inserts.
2) What will happen if more than one thread accesses the database at the same time? What if more than one application accesses the database at the same time? It seems to me the values are bound to become erroneous.
It's hard for me to believe that the MySQL C++ Connector wouldn't offer the feature that the Java Connector as well as the PHP Connector offer.
An example taken from http://forums.mysql.com/read.php?167,294960,295250
sql::Statement* stmt = conn->createStatement();
sql::ResultSet* res = stmt->executeQuery("SELECT @@identity AS id");
res->next();
int64_t retVal = res->getInt64("id");
In a nutshell, if your ID column is not an auto_increment column, then you can just as well use
SELECT @@identity AS id
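For completeness, here is what the whole round trip can look like with Connector/C++. This is only a minimal sketch: connection setup and error handling are omitted, and the function name insertAndGetId as well as the table/column names (mytab, name) are invented for the example.

#include <cstdint>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

// LAST_INSERT_ID() (and @@identity) are evaluated per connection, so this
// pair of calls is race-free as long as no other thread shares `conn`.
int64_t insertAndGetId(sql::Connection *conn) {
    std::unique_ptr<sql::Statement> stmt(conn->createStatement());
    stmt->execute("INSERT INTO mytab (name) VALUES ('example')");

    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SELECT LAST_INSERT_ID() AS id"));
    res->next();
    return res->getInt64("id");
}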
EDIT:
Not sure what you mean by a second query/round trip. At first I thought you were looking for a different way to get the ID of the last inserted row, but it looks like you are more interested in whether you can save the round trip.
If that's the case, then I completely agree with @WhozCraig; you can put both your queries into a single statement, like INSERT INTO tab VALUES ...; SELECT LAST_INSERT_ID();, which will be a single call
OR
you can have a stored procedure like the one below to do the same and save the round trip
DELIMITER //
CREATE PROCEDURE myproc()
BEGIN
    INSERT INTO mytab VALUES (...);
    SELECT LAST_INSERT_ID();
END //
DELIMITER ;
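If you go the stored-procedure route, calling it from Connector/C++ could look roughly like the sketch below (my own untested outline: myproc as defined above, error handling omitted). The getMoreResults() loop at the end drains the additional results a CALL produces before the statement is reused.

#include <cstdint>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

int64_t callMyproc(sql::Connection *conn) {
    std::unique_ptr<sql::Statement> stmt(conn->createStatement());
    stmt->execute("CALL myproc()");

    // The first result set of the CALL carries the SELECT LAST_INSERT_ID() row.
    std::unique_ptr<sql::ResultSet> res(stmt->getResultSet());
    int64_t newId = -1;
    if (res && res->next())
        newId = res->getInt64(1);

    // A CALL produces additional results; consume them before reusing
    // the statement.
    while (stmt->getMoreResults()) {
        std::unique_ptr<sql::ResultSet> extra(stmt->getResultSet());
    }
    return newId;
}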
Let me know if this is not what you are trying to achieve.
I'm using Pentaho PDI 7.1. I'm trying to convert data from MySQL to MySQL, changing the structure of the data.
I'm reading the source table (customers), and for each row I have to run another query to calculate the balance.
I was trying to use the Database lookup step to accomplish this, but maybe it is not the best way.
I have to run a query like this to get the balance:
SELECT SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
WHERE contoFidelizzato_id = ?
I need to set the parameter from the value produced in the previous step. Any advice?
The Database lookup step may be a good idea, especially if you are used to reasoning in database terms, but it results in one query per input row, which may not be the most efficient approach.
A more PDI-ish style would be to make the query like:
SELECT contoFidelizzato_id
, SUM(CASE WHEN direzione='ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
GROUP BY contoFidelizzato_id
and use it as the info source of a Stream lookup step.
An even more PDI-ish style would be to split the source table (customers) into two flows: one in which you keep the source rows, and one that you group by contoFidelizzato_id. Of course, you then need a Formula step, some JavaScript, or a sign change in the SQL of the Table input step to flip the sign when needed.
Test to find out which strategy is better in your case. You'll soon discover that PDI is very good at handling large data.
I'm creating a database in SQLite as follows:
QSqlQuery create_address;
create_address.prepare("CREATE TABLE addresses (addressid INTEGER PRIMARY KEY AUTOINCREMENT, address TEXT UNIQUE)");
QSqlQuery create_devices;
create_devices.prepare("CREATE TABLE devices (ch TEXT PRIMARY KEY, addressid INTEGER REFERENCES addresses(addressid))");
create_address.exec(); // create the referenced table first
create_devices.exec();
I need to access this database a lot of times (~660'000), passing ch and retrieving the corresponding address; the ch passed may not be in the database (in that case an empty string is returned).
To do so, I made the following query
//outside loop
QSqlQuery find_address;
find_address.prepare("SELECT address FROM addresses,devices WHERE devices.addressid = addresses.addressid AND devices.ch = :chcode");
//in loop
find_address.bindValue(":chcode",QString::fromStdString(ch_code));
find_address.exec();
The problem is that this process is very slow (it takes almost 12 minutes to finish all the 660'000 searches).
Before this I tried an INNER JOIN, but the performance was pretty much the same.
Is there a better way to write the query and/or structure the DB to get a faster execution time?
Since you have a loop with an SQL query, you can wrap it in a transaction, which may improve the performance:
QSqlDatabase::database().transaction();
.........
// your loop
.........
QSqlDatabase::database().commit();
The performance may also be improved by adding indexes. In your case, an index can be created on the fields devices.ch and devices.addressid. In the sqlite console, do the following:
CREATE INDEX devices_index ON devices(ch, addressid);
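Putting the two suggestions together, a minimal sketch could look like this (it reuses the prepared query from the question; the ch_codes container stands in for your ~660'000 inputs, and error handling is omitted):

#include <QSqlDatabase>
#include <QSqlQuery>
#include <QString>
#include <QVariant>
#include <string>
#include <vector>

void lookupAddresses(const std::vector<std::string> &ch_codes) {
    // One-time setup: the index makes the lookup on devices.ch cheap.
    QSqlQuery setup;
    setup.exec("CREATE INDEX IF NOT EXISTS devices_index ON devices(ch, addressid)");

    // Prepare once, outside the loop.
    QSqlQuery find_address;
    find_address.prepare("SELECT address FROM addresses, devices "
                         "WHERE devices.addressid = addresses.addressid "
                         "AND devices.ch = :chcode");

    QSqlDatabase::database().transaction(); // group all lookups in one transaction
    for (const std::string &ch_code : ch_codes) {
        find_address.bindValue(":chcode", QString::fromStdString(ch_code));
        find_address.exec();
        QString address; // stays empty when ch_code is not in the database
        if (find_address.next())
            address = find_address.value(0).toString();
        // ... use address ...
    }
    QSqlDatabase::database().commit();
}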
Without any measurements or insight into what the tables addresses and devices look like, it's hard to give precise advice.
Maybe the join is the bottleneck, so you could try to create a view first. This would avoid joining the two tables 660'000 times. See the SQLite documentation on CREATE VIEW.
Next (a shot in the dark): instead of executing 660'000 queries, make batches, as sketched below. For example, replace AND devices.ch = :chcode with AND devices.ch IN (:chcodelist) and glue multiple ch codes together. Depending on the content, take care of the escaping yourself.
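For illustration, one way to build such a batch with Qt, letting the driver do the escaping through positional placeholders (the batch size of 500 and the lookupBatched/chCodes names are my own assumptions):

#include <QSqlQuery>
#include <QStringList>
#include <QVariant>
#include <QtGlobal>

void lookupBatched(const QStringList &chCodes) {
    const int batchSize = 500;
    for (int start = 0; start < chCodes.size(); start += batchSize) {
        const int n = qMin(batchSize, chCodes.size() - start);

        // Build "?, ?, ?, ..." for this batch.
        QStringList placeholders;
        for (int i = 0; i < n; ++i)
            placeholders << "?";

        QSqlQuery q;
        q.prepare("SELECT devices.ch, address FROM addresses, devices "
                  "WHERE devices.addressid = addresses.addressid "
                  "AND devices.ch IN (" + placeholders.join(", ") + ")");
        for (int i = 0; i < n; ++i)
            q.addBindValue(chCodes.at(start + i));

        q.exec();
        while (q.next()) {
            // q.value(0) is the ch code, q.value(1) its address; ch codes
            // missing from the result were not in the database.
        }
    }
}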
I'm working for the first time with OCI, so this may be a basic question... I'm coming from the MySQL world... I'm using VS2012 with C++.
I wish to do a simple SELECT statement with some variations in the WHERE and LIMIT clauses. The SQL query is built dynamically by a C++-written processor, and the statement comes ready from this module. So I may have something like:
SELECT * FROM MYTABLE3; or
SELECT F1, F2, F3 FROM MYTABLE1; or even
SELECT F1, F3, F4 FROM MYTABLE2 WHERE ID > 10;
No big deal here.
My problem is that I DON'T KNOW THE TABLE FORMAT IN ADVANCE, so I cannot bind variables to it before executing the statement and fetching the table structure. In MySQL that's easy: I execute the statement and get the ResultSet. From the ResultSet I can check the number of columns retrieved and the name, data format, and size of each column. After reading that data I build a dynamic matrix with the table structure and its data, which is my final goal. Something like:
sql::ResultSetMetaData *resultMeta = resultSet->getMetaData();
while (resultSet->next())
{
for (unsigned int i = 1; i <= resultMeta->getColumnCount(); i++)
{
std::string label = resultMeta->getColumnLabel(i);
std::string type = resultMeta->getColumnTypeName(i);
// ... Get the resultset attributes and data
}
retData.push_back(data);
}
From what I've seen in Oracle, I need to bind the variables that are going to be returned before issuing the execute/fetch operations. In my case I cannot do that, because I don't know the table structure in advance...
I'm pretty sure Oracle can do this; I just don't know how. I've read the Oracle docs and did not find references to it...
Help is very much appreciated, and code examples too. I've been stuck on this for 2 days now... Thanks for helping.
Can you please try the following on your statement handle (stmhp)? This will give you the column count of your Oracle statement.
err = OCIAttrGet((dvoid *)stmhp, (ub4)OCI_HTYPE_STMT,
                 (dvoid *)&parmcnt, (ub4 *)0,
                 (ub4)OCI_ATTR_PARAM_COUNT, errhp);
Please also check this link, which will help you find out the data type of every column in the result set:
Retrieving data type information for columns in an Oracle OCCI ResultSet
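To give a more complete picture, below is a rough, untested outline of the usual describe/define dance in plain OCI (handle names svchp/stmhp/errhp as in the snippet above; error checking omitted; every column is defined as text with an assumed 4000-byte buffer):

#include <oci.h>
#include <vector>

void fetchUnknownShape(OCISvcCtx *svchp, OCIStmt *stmhp, OCIError *errhp) {
    // 1. For a SELECT, executing with iters = 0 only describes the
    //    statement; no rows are fetched yet.
    OCIStmtExecute(svchp, stmhp, errhp, 0, 0, NULL, NULL, OCI_DEFAULT);

    // 2. Number of columns in the result set.
    ub4 colcount = 0;
    OCIAttrGet(stmhp, OCI_HTYPE_STMT, &colcount, 0, OCI_ATTR_PARAM_COUNT, errhp);

    std::vector<std::vector<char> > buffers(colcount);
    std::vector<sb2> indicators(colcount);

    for (ub4 i = 1; i <= colcount; ++i) {
        // 3. Describe column i: data type, size, and name.
        OCIParam *colhd = NULL;
        OCIParamGet(stmhp, OCI_HTYPE_STMT, errhp, (void **)&colhd, i);

        ub2 dtype = 0, colsize = 0;
        text *colname = NULL;
        ub4 colnamelen = 0;
        OCIAttrGet(colhd, OCI_DTYPE_PARAM, &dtype, 0, OCI_ATTR_DATA_TYPE, errhp);
        OCIAttrGet(colhd, OCI_DTYPE_PARAM, &colsize, 0, OCI_ATTR_DATA_SIZE, errhp);
        OCIAttrGet(colhd, OCI_DTYPE_PARAM, &colname, &colnamelen, OCI_ATTR_NAME, errhp);

        // 4. Define every column as SQLT_STR: OCI converts numbers and
        //    dates to text, which is enough to fill a generic value matrix.
        buffers[i - 1].resize(4000);
        OCIDefine *defnp = NULL;
        OCIDefineByPos(stmhp, &defnp, errhp, i,
                       &buffers[i - 1][0], (sb4)buffers[i - 1].size(),
                       SQLT_STR, &indicators[i - 1], NULL, NULL, OCI_DEFAULT);
    }

    // 5. Fetch row by row, much like resultSet->next() in the MySQL code.
    while (OCIStmtFetch2(stmhp, errhp, 1, OCI_FETCH_NEXT, 0, OCI_DEFAULT)
           == OCI_SUCCESS) {
        // buffers[c] now holds the NUL-terminated text of column c + 1.
    }
}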
Alright, I am in a rather difficult situation, or at least I think so anyway. I have been doing some research on how to fix my problem but have really come up empty-handed.
I need to be able to reindex the rowids of my table after I delete a row, so that at any given time, when I want to update or access a row by its rowid, I am accessing the correct one.
Now, for those of you asking why: basically I am interfacing with a "homebrewed" DB that was programmed in C and is really just a bunch of memory locations accessed as if they were a DB table. So a row can be looked up by searching for a value in the table, or by simply asking for, say, row 6. Lastly, the table could consist of anything, with any values, which means there is no column to use as an index; to my knowledge, the only way for me to index a row by row number is the rowid.
I have found that VACUUM would do what I need, but it appears that the system the database is on doesn't give SQLite write privileges, so when VACUUM is run it comes back with an error (ERROR 14, "unable to open database file"; I know my DB is open, so the lack of write privileges is the only explanation I can come up with). I have also read about AUTOINCREMENT, but I didn't really understand it or think it would fix my problem.
Any suggestions or ideas from the SQLite or database geniuses out there would be appreciated.
I'm not sure I have completely understood your problem, but if you can use SQL code, maybe you can write a query to update the IDs (assuming they should end up in dense order).
You can use a query like this:
UPDATE t1
SET id = (SELECT rank
          FROM (SELECT id,
                       (SELECT count() + 1
                        FROM (SELECT DISTINCT id
                              FROM t1 AS t
                              WHERE t.id < t1.id)
                       ) rank
                FROM t1) AS sub
          WHERE sub.id = t1.id);
You can check my demo on SQLFiddle. In this demo you will see the result of the DELETE and UPDATE statements (simulating your case) if you run all the queries together.
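For completeness, since the homebrewed DB is driven from C, running the DELETE plus the renumbering through the SQLite C API could look like this sketch (the file name, table name, and deleted id are placeholders; error handling is reduced to a printout):

#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3 *db = NULL;
    sqlite3_open("mydb.sqlite", &db); // placeholder file name

    char *err = NULL;
    sqlite3_exec(db, "DELETE FROM t1 WHERE id = 3", NULL, NULL, &err);

    // Renumber the remaining ids into a dense 1..N sequence
    // (this is the UPDATE from above, as one string).
    const char *renumber =
        "UPDATE t1 SET id = (SELECT rank FROM"
        " (SELECT id, (SELECT count()+1 FROM"
        "   (SELECT DISTINCT id FROM t1 AS t WHERE t.id < t1.id)) rank"
        "  FROM t1) AS sub WHERE sub.id = t1.id)";
    if (sqlite3_exec(db, renumber, NULL, NULL, &err) != SQLITE_OK)
        std::fprintf(stderr, "renumber failed: %s\n", err);

    sqlite3_close(db);
    return 0;
}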
I am using C++ and MySQL.
I have data objects I want to persist to the database. They need to have a unique ID for identification purposes. The question is, how to get this unique ID?
Here is what I came up with:
1) Use the auto_increment feature of MySQL. But how do I get the ID then? I am aware that MySQL offers the "SELECT LAST_INSERT_ID()" feature, but that looks like a race condition, as two objects could be inserted in quick succession. Also, there is nothing else that makes the objects discernible: two objects could be created at pretty much the same time with exactly the same data.
2) Generate the UID on the C++ side. No dice, either. There are multiple programs that will write to and read from the database, which do not know of each other.
3) Insert with MAX(uid)+1 as the uid value. But then I basically have the same problem as in 1), because we still have the race condition.
Now I am stumped. I assume other people must have run into this problem as well, but so far I have not found any answers.
Any ideas?
The query:
SELECT LAST_INSERT_ID()
will return the last ID inserted on your specific connection, not globally. So there is no race condition, unless your own code is multi-threaded, in which case you would want to surround the INSERT and the SELECT with an MT lock of some sort, as sketched below.
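For the multi-threaded case, here is a minimal sketch of such a lock with Connector/C++ (the objects table, the name column, and the insertObject function are illustrative, not part of the question):

#include <cstdint>
#include <memory>
#include <mutex>
#include <string>
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>
#include <cppconn/statement.h>
#include <cppconn/resultset.h>

std::mutex conn_mutex; // guards the shared sql::Connection

int64_t insertObject(sql::Connection *conn, const std::string &name) {
    // Hold the lock across both statements so no other thread can slip an
    // INSERT onto this connection between them.
    std::lock_guard<std::mutex> lock(conn_mutex);

    std::unique_ptr<sql::PreparedStatement> ins(
        conn->prepareStatement("INSERT INTO objects (name) VALUES (?)"));
    ins->setString(1, name);
    ins->executeUpdate();

    std::unique_ptr<sql::Statement> stmt(conn->createStatement());
    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SELECT LAST_INSERT_ID()"));
    res->next();
    return res->getInt64(1);
}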