I am using C++ with GCC 4.8 (4.9 is available) and the pqxx driver, version 4.0.1.
The PostgreSQL database is the latest stable release.
My problem is all about complexity and resource balance:
I need to execute an INSERT against the database (and optionally get a pqxx::result), and the id column in that table is populated from nextval(table_seq_id).
Is it possible to get the id of the inserted row as a result? There is a workaround: ask the database for the current value of the sequence and issue the INSERT with current value + 1 (or + n), but that requires an "ask, then insert" chain.
The DB needs to handle more than 6K large requests per second, so I would like to ask for the id as infrequently as possible. Bulk insert is not an option.
As documented here, you can add a RETURNING clause to the INSERT query to return values from the inserted row(s). The documentation gives an example similar to what you want, returning an ID:
INSERT INTO distributors (did, dname) VALUES (DEFAULT, 'XYZ Widgets')
RETURNING did;
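In pqxx this means the INSERT itself can hand back the generated id in a single round trip. A minimal sketch (the database, table, and column names are made up; in real code the value would be passed as a parameter or properly quoted):

#include <iostream>
#include <pqxx/pqxx>

int main() {
    pqxx::connection conn("dbname=mydb user=myuser");  // placeholder connection string
    pqxx::work txn(conn);

    // One round trip: the INSERT returns the id generated by the sequence.
    pqxx::result r = txn.exec(
        "INSERT INTO items (name) VALUES ('widget') RETURNING id");
    txn.commit();

    long id = r[0][0].as<long>();  // one row comes back per inserted row
    std::cout << "inserted id = " << id << "\n";
}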
Related
I have an apparently simple task to perform: I have to convert a column in several tables from a string to a new entity (integer FOREIGN KEY) value.
I have 10 tables in the DB with a column called "app_version", which at the moment is a VARCHAR column. Since I am going to do a small refactor of the project, I would like to convert those VARCHAR columns to a new column that contains an ID representing the newly mapped value, so:
V1 -> ID: 1
V2 -> ID: 2
and so on
I have prepared a Doctrine migration (I am using Symfony 3.4) which performs the conversion by DROPPING the old column and adding the new id column for the AppVersion table.
Of course I need to preserve my existing data.
I know about preUp and postUp, but I can't figure out how to do this without hitting DB performance too hard. I could collect the data via SELECT in preUp, store it in some PHP variables, and use it later in postUp to write the new values to the DB, but since I have 10 tables with many rows this becomes a disaster very fast.
Do you have any suggestions to make this smooth and easy?
Please do not ask why I have to do this refactor now instead of having set up the DB correctly in the first place. :D
Keywords for ideas: transactions? bulk queries? avoiding PHP variable storage? writing an SQL file? Anything could work.
I feel dumb, but the solution was much simpler: I created a custom migration with all the "ALTER TABLE [table_name] DROP app_version" statements, to be executed AFTER one that simply does:
UPDATE [table_name] SET app_version_id = 1 WHERE app_version = "V1"
I have a table in a database which stores items. Each item has a unique ID, which the DB generates upon insertion (auto-increment).
A user may perform a specific task that will add X items to the database; however, my program (a C++ server application using the MySQL connector) should return the IDs that the database generated right away. For example, if I add 6 items, the server must return 6 new unique IDs to the client.
What is the fastest/cleanest way to do such a thing? So far I have been doing an INSERT followed by a SELECT for each new item, or an INSERT followed by last_insert_id; however, if there are 50 items to add, it takes at least a few seconds, which is not good at all for the user experience.
sql_task.query("INSERT INTO `ItemDB` (`ItemName`, `Type`, `Time`) VALUES ('%s', '%d', '%d')", strName.c_str(), uiType, uiTime);
Getting the ID:
uint64_t item_id { sql_task.last_id() }; //This calls mysql_insert_id
I believe you need to rethink your design slightly. Let's use the analogy of a sales order. With a sales order (or invoice #) the user gets an invoice number (auto_increment) as well as multiple line item numbers (also auto_increment).
The sales order and all of its line items are selected for insert (from the GUI) and the inserts are performed. First, the sales order row is inserted and its id is saved in a variable for the subsequent calls that insert the line items. The line items are then just inserted, without immediately returning their auto_increment id values. The application is merely returned the sales order number at the end. How your app uses that sales order number in subsequent calls is up to you, but it does not need to retrieve all X (or 50) line item rows immediately, as it has the sales order number saved somewhere. Let's call that sales order number XYZ.
When you actually need the information, an example call could look like
select lineItemId
from lineItems
where salesOrderNumber=XYZ
order by lineItemId
You need to remember that in a multi-user system there is no guarantee of receiving a contiguous block of numbers. Nor should that matter to you, as the line items are all attached to the correct sales order number.
Again, the above is just an analogy, used for illustration purposes.
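A rough Connector/C++-style sketch of that flow; the salesOrders/lineItems tables, columns, and connection handling are all hypothetical:

#include <cstdint>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>

// `conn` is assumed to be an already-open sql::Connection*.
uint64_t insertOrderWithItems(sql::Connection* conn) {
    std::unique_ptr<sql::Statement> stmt(conn->createStatement());

    // Insert the parent row; its auto_increment id is the only one we fetch.
    stmt->execute("INSERT INTO salesOrders (createdAt) VALUES (NOW())");
    std::unique_ptr<sql::ResultSet> res(
        stmt->executeQuery("SELECT LAST_INSERT_ID()"));
    res->next();
    uint64_t orderId = res->getUInt64(1);

    // Insert the line items tagged with that id; no per-row id fetch needed.
    std::unique_ptr<sql::PreparedStatement> ins(conn->prepareStatement(
        "INSERT INTO lineItems (salesOrderNumber, itemName) VALUES (?, ?)"));
    const char* names[] = {"first item", "second item", "third item"};
    for (const char* name : names) {
        ins->setUInt64(1, orderId);
        ins->setString(2, name);
        ins->executeUpdate();
    }
    return orderId;  // line item ids can be selected later by order number
}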
That's a common but hard-to-solve problem. I'm unsure about MySQL, but PostgreSQL uses sequences to generate automatic ids. Inserting frameworks (object-relational mappers) use that when they expect to insert many values: they query the sequence directly for a batch of IDs and then insert the new rows using those already-known IDs. That way, there is no need for an additional query after each insert to get the ID.
The downside is that the relation between ID and insertion time can be non-monotonic when different writers interleave their inserts. That is not a problem for the database, but some (poorly written?) programs could expect it to be.
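A rough pqxx-flavoured sketch of that pattern, assuming a hypothetical sequence named items_id_seq; the trick is to reserve n values in one query via generate_series and reuse them in the subsequent inserts:

#include <vector>
#include <pqxx/pqxx>

// Reserve `n` ids from the sequence in one round trip; the caller then uses
// them as explicit id values in its INSERT statements.
std::vector<long> reserve_ids(pqxx::work& txn, int n) {
    pqxx::result r = txn.exec(
        "SELECT nextval('items_id_seq') FROM generate_series(1, " +
        pqxx::to_string(n) + ")");
    std::vector<long> ids;
    for (pqxx::result::size_type i = 0; i < r.size(); ++i)
        ids.push_back(r[i][0].as<long>());
    return ids;
}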
As your ID is auto-incremented, you can do only two SELECT queries, one before and one after the INSERT queries:
SELECT AUTO_INCREMENT FROM information_schema.tables WHERE table_name = 'dbTable' AND table_schema = DATABASE();
--
-- INSERT INTO dbTable... (one or many, does not matter);
--
SELECT LAST_INSERT_ID() AS lastID;
This will give you the sequence between the first and the last inserted IDs. Then you can easily calculate how many there are.
I have been searching for a while for how to get the generated auto-increment ID from an "INSERT INTO ... (...) VALUES (...)". Even on Stack Overflow, I only find the answer of using a "SELECT LAST_INSERT_ID()" in a subsequent query. I find this solution unsatisfactory for a number of reasons:
1) This will effectively double the queries sent to the database, especially since it is mostly handling inserts.
2) What will happen if more than one thread accesses the database at the same time? What if more than one application accesses the database at the same time? It seems to me the values are bound to become erroneous.
It's hard for me to believe that the MySQL C++ Connector wouldn't offer the feature that the Java Connector as well as the PHP Connector offer.
An example taken from http://forums.mysql.com/read.php?167,294960,295250
sql::Statement* stmt = conn->createStatement();
sql::ResultSet* res = stmt->executeQuery("SELECT @@identity AS id");
res->next();
my_ulong retVal = res->getInt64("id");
In a nutshell, if your ID column is an auto_increment column then you can just as well use
SELECT @@identity AS id
EDIT:
Not sure what you mean by a second query/round trip. At first I thought you wanted to know a different way to get the ID of the last inserted row, but it looks like you are more interested in knowing whether you can save the round trip.
If that's the case, then I completely agree with @WhozCraig; you can put both of your queries into a single statement, like insert into tab values ...; select last_insert_id(), which will be a single call
OR
you can have a stored procedure like the one below to do the same and save the round trip
create procedure myproc()
begin
insert into mytab values ...;
select last_insert_id();
end
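Calling that procedure from Connector/C++ and reading the id back could look roughly like the sketch below (whether extra getMoreResults() handling is needed for stored-procedure result sets depends on the connector version):

#include <cstdint>
#include <memory>
#include <cppconn/connection.h>
#include <cppconn/resultset.h>
#include <cppconn/statement.h>

// `conn` is assumed to be an already-open sql::Connection*.
uint64_t insertViaProc(sql::Connection* conn) {
    std::unique_ptr<sql::Statement> stmt(conn->createStatement());

    // Single round trip: the procedure performs the INSERT and selects the id.
    stmt->execute("CALL myproc()");

    std::unique_ptr<sql::ResultSet> res(stmt->getResultSet());
    uint64_t id = 0;
    if (res && res->next())
        id = res->getUInt64(1);
    return id;
}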
Let me know if this is not what you are trying to achieve.
I have a table which is updated with some regularity, and I want to be able to version it so that I can roll back to a previous version at any time. I want to do this at an abstract level, so that I am not versioning the data itself (i.e. having versioning be part of the table) but rather storing the transaction log in another table. What is the best way of doing this?
Possible solution: add a trigger for onInsert, onUpdate, onDelete, etc., and have those perform the relevant inserts into some other table, but that seems like more work than necessary, seeing as SQLite already has a transaction log; if only I could somehow query the log (and keep it from being deleted).
For example:
CREATE TABLE Contacts(
id INT PRIMARY KEY,
first VARCHAR,
middle VARCHAR,
last VARCHAR,
phone VARCHAR,
date TIMESTAMP
);
With that table, I would want to be able to answer the question: "What phone number did so and so have prior to 'some day'" or "Did so and so ever change their first name?"
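For reference, the trigger idea mentioned in the question can be sketched directly with the SQLite C API: a history table plus an AFTER UPDATE trigger that archives the old row. The ContactsHistory table and trigger name below are made up; treat this as an illustration, not a complete audit scheme:

#include <sqlite3.h>

// Create a history table and a trigger that archives the previous values of a
// Contacts row whenever it is updated. Assumes `db` is an open handle.
int install_history(sqlite3* db) {
    const char* ddl =
        "CREATE TABLE IF NOT EXISTS ContactsHistory("
        "  id INT, first VARCHAR, middle VARCHAR, last VARCHAR,"
        "  phone VARCHAR, date TIMESTAMP,"
        "  changed_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP);"
        "CREATE TRIGGER IF NOT EXISTS contacts_update_audit "
        "AFTER UPDATE ON Contacts BEGIN "
        "  INSERT INTO ContactsHistory(id, first, middle, last, phone, date) "
        "  VALUES (OLD.id, OLD.first, OLD.middle, OLD.last, OLD.phone, OLD.date); "
        "END;";
    return sqlite3_exec(db, ddl, nullptr, nullptr, nullptr);
}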
Sounds like you want to store the DATA as different versions. For example, you could have a table that contains the filename (or some similar identifier) and a version number, and a second table that contains the content of each "file" (based on some ID). When a file is updated, the file is stored again. If you want to do it like a proper version system, you store the difference between the previous and the new data in the second table [or the latest file and the differences towards the older versions of the file, or some variation on that theme; there are many open source version control systems out there, all of which have slightly different solutions for how the actual file data is stored].
However, there are also existing encrypted variants of version control systems, such as GIST, which may do what you want in the first place; but without further details of exactly what you want to do, it's hard to say.
I have a simple database and want to update an int value. I initially do a query and get back a ResultSet (sql::ResultSet). For each of the entries in the result set I want to modify a value that is in one particular column of a table, then write it back out to the database/update that entry in that row.
It is not clear to me based on the documentation how to do that. I keep seeing "Insert" statements along with updates - but I don't think that is what I want - I want to keep most of the row of data intact - just update one column.
Can someone point me to some sample code or other clear reference/resource?
EDIT:
Alternatively, is there a way to tell the database to update a particular field (row/col) to increment an int value by some value?
EDIT:
So what is the typical way that people use MySQL from C++? The C API or mysql++? I guess I chose the wrong API...
From a quick scan of the docs it appears Connector/C++ is a partial implementation of the Java JDBC API for C++. I didn't find any reference to updateable result sets so this might not be possible. In Java JDBC the ResultSet interface includes support for updating the current row if the statement was created with ResultSet.CONCUR_UPDATABLE concurrency.
You should investigate whether Connector/C++ supports updateable resultsets.
EDIT: To update a row you will need to use a PreparedStatement containing an SQL UPDATE and then call the statement's executeUpdate() method. With this approach you must identify the record to be updated with a WHERE clause. For example:
update users set userName='John Doe' where userID=?
Then you would create a PreparedStatement, set the parameter value, and call executeUpdate().
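A short sketch of that with Connector/C++, also covering the increment-in-place case from the earlier edit; the users/loginCount names are just for illustration:

#include <memory>
#include <cppconn/connection.h>
#include <cppconn/prepared_statement.h>

// `conn` is assumed to be an already-open sql::Connection*.
void bumpLoginCount(sql::Connection* conn, int userId, int delta) {
    // Let the database do the increment in place: no read-modify-write needed.
    std::unique_ptr<sql::PreparedStatement> ps(conn->prepareStatement(
        "UPDATE users SET loginCount = loginCount + ? WHERE userID = ?"));
    ps->setInt(1, delta);
    ps->setInt(2, userId);
    int rowsChanged = ps->executeUpdate();  // number of rows actually updated
    (void)rowsChanged;                      // check or log this in real code
}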