I configured my Firebird database to auto-increment the primary key of the table:
CREATE GENERATOR GEN_CHANNEL_PARAMETER_SET_ID;
SET GENERATOR GEN_CHANNEL_PARAMETER_SET_ID TO 0;
CREATE TRIGGER CHANNEL_PARAMETER_SETS_BI FOR CHANNEL_PARAMETER_SETS
ACTIVE BEFORE INSERT POSITION 0
AS
BEGIN
if (NEW.CHANNEL_PARAMETER_SET_ID is NULL) then NEW.CHANNEL_PARAMETER_SET_ID = GEN_ID(GEN_CHANNEL_PARAMETER_SET_ID, 1);
END
Now, in my C++ program using IBPP I have the following problem:
When inserting a dataset into a new row of this table, I know all values in my C++ program except the new primary key, because the database creates it. How can I retrieve this key from the database?
Maybe someone else inserted an entry too, just a moment after I inserted mine, so retrieving the PK with the highest value could return the wrong key. How can I handle this?
Adopting Amir Rahimi Farahani's answer, I found the following solution to my problem:
I use a generator:
CREATE GENERATOR GEN_CHANNEL_PARAMETER_SET_ID;
SET GENERATOR GEN_CHANNEL_PARAMETER_SET_ID TO 0;
and the following C++/IBPP/SQL code:
// SQL statement
m_DbStatement->Execute(
"SELECT NEXT VALUE FOR gen_channel_parameter_set_id FROM rdb$database"
);
// Retrieve Data
IBPP::Row ibppRow;
int64_t channelParameterSetId;
m_DbStatement->Fetch(ibppRow);
ibppRow->Get (1, channelParameterSetId);
// SQL statement
m_DbStatement->Prepare(
"INSERT INTO channel_parameter_sets "
"(channel_parameter_set_id, ...) "
"VALUES (?, ...) "
);
// Set variables
m_DbStatement->Set (1, channelParameterSetId);
...
...
// Execute
m_DbStatement->Execute ();
m_DbTransaction->CommitRetain ();
It is possible to generate and use the new id before inserting the new record:
SELECT NEXT VALUE FOR GEN_CHANNEL_PARAMETER_SET_ID FROM rdb$database
You now know the value of the new primary key.
Update:
IBPP supports RETURNING too:
// SQL statement
m_DbStatement->Prepare(
"INSERT INTO channel_parameter_sets "
"(...) VALUES (...) RETURNING channel_parameter_set_id"
);
// Execute
m_DbStatement->Execute ();
m_DbTransaction->CommitRetain ();
// Get the generated id
m_DbStatement->Get (1, channelParameterSetId);
...
To retrieve the value of the generated key (or any other column) you can use INSERT ... RETURNING ....
For example:
INSERT INTO myTable (x, y, z) VALUES (1, 2, 3) RETURNING ID
Also, a lot of drivers provide extra features to support RETURNING, but I'm not familiar with IBPP.
Note that from the perspective of a driver the use of RETURNING will make the insert act like an executable stored procedure; some drivers might require you to execute it in a specific way.
Related
I generate a list of ID numbers. I want to execute an insert statement that grabs all records from one table where the ID value is in my list and inserts those records into another table.
Instead of running through multiple execute statements (as I know is possible), I found the cx_Oracle executemany() function, which supposedly can execute everything with a single statement and a list parameter. (It also avoids the clunky formatting of the SQL statement before passing in the parameters.) But I think I need to alter my list before passing it in as a parameter; I'm just not sure how.
I referenced this web page:
https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-executemany.html
ids = getIDs()
print(ids)
[('12345',),('24567',),('78945',),('65423',)]
sql = """insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)"""
cursor.prepare(sql)
cursor.executemany(None, ids)
I expected the SQL statement to execute as follows:
Insert into scheme.newtable
select id, data1, data2, data3 from scheme.oldtable where id in ('12345','24567','78945','65423')
Instead I get the following error:
ORA-01036: illegal variable name/number
Edit:
I found this Stack Overflow question: How can I do a batch insert into an Oracle database using Python?
I updated my code to prepare the statement beforehand and changed the list items to tuples, and I'm still getting the same error.
You use executemany() for batch DML, e.g. when you want to insert a large number of values into a table as an efficient equivalent of running multiple insert statements. There are cx_Oracle examples discussed in https://blogs.oracle.com/opal/efficient-and-scalable-batch-statement-execution-in-python-cx_oracle
However what you are doing with
insert into scheme.newtable
select id, data1, data2, data3
from scheme.oldtable
where id in (%s)
is a different thing - you are trying to execute one INSERT statement using multiple values in an IN clause. You would use a normal execute() for this.
Since Oracle keeps bind data distinct from SQL, you can't pass in multiple values to a single bind parameter; the data is treated as a single SQL entity, not a list of values. You could use the %s string substitution syntax you already have, but this is open to SQL injection attacks.
There are various generic techniques that are common to Oracle language interfaces, see https://oracle.github.io/node-oracledb/doc/api.html#sqlwherein for solutions that you can rewrite to Python syntax.
Use a temporary table to save the ids (batch insert):
cursor.prepare('insert into temp_table values (:1)')
dictList = [{'1': x} for x in ids]
cursor.executemany(None, dictList)
Then insert the selected values into newtable:
sql = "insert into scheme.newtable select id, data1, data2, data3 from scheme.oldtable inner join temp_table on scheme.oldtable.id = temp_table.id"
cursor.execute(sql)
The script to create the temporary table in Oracle:
CREATE GLOBAL TEMPORARY TABLE temp_table
(
ID number
);
commit
I hope this is useful.
I have a form in my Oracle APEX based application. I want a validation on the submit button so that if the combination of two specific entries is already present in the SQL table/view, an alert is shown, like: "The entry for this combination of values of A and B already exists, please enter correct values."
If those two specific entries are represented by two form items (e.g. :P1_ONE and :P1_TWO), then the validation might be a function that returns error text, such as:
declare
l_cnt number;
retval varchar2(200);
begin
select count(*)
into l_cnt
from your_table t
where t.column_one = :P1_ONE
and t.column_two = :P1_TWO;
if l_cnt > 0 then
retval := 'The entry for this combination already exists';
end if;
return retval;
end;
The query itself might need to be modified, depending on what exactly you meant by describing the problem; that's the way I understood it.
In any case, you should have a unique constraint on the table and let that validate the incoming data.
Any violation of this constraint will raise an exception, which can be transformed within the APEX error handling procedure.
Using the ignite C++ API, I'm trying to find a way to perform an SqlFieldsQuery to select a specific field, but would like to do this for a set of keys.
One way to do this, is to do the SqlFieldsQuery like this,
SqlFieldsQuery("select field from Table where _key in (" + keys_string + ")")
where the keys_string is the list of the keys as a comma separated string.
Unfortunately, this takes a very long time compared to just doing cache.GetAll(keys) for the set of keys, keys.
Is there an alternative, faster way of getting a specific field for a set of keys from an ignite cache?
EDIT:
After reading the answers, I tried changing the query to:
auto query = SqlFieldsQuery("select field from Table t join table(_key bigint = ?) i on t._key = i._key")
I then add the arguments from my set of keys like this:
for(const auto& key: keys) query.AddArgument(key);
but when running the query, I get the error:
Failed to bind parameter [idx=2, obj=159957, stmt=prep0: select field from Table t join table(_key bigint = ?) i on t._key = i._key {1: 159956}]
Clearly, this doesn't work because there is only one '?'.
So I then tried to pass a vector<int64_t> of the keys, but I got an error which basically says that std::vector<int64_t> did not specialize the ignite BinaryType. So I did this as defined here. When calling e.g.
writer.WriteInt64Array("data", data.data(), data.size())
I gave the field an arbitrary name, "data". This then results in the error:
Failed to run map query remotely.
Unfortunately, the C++ API is neither well documented nor complete, so I'm wondering if I'm missing something or whether the API does not allow passing an array as an argument to the SqlFieldsQuery.
A query that uses an IN clause doesn't always use indexes properly. The workaround for this is described here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
Also, if you have the option to do GetAll instead and look up by key directly, you should use it. It will likely be more effective anyway.
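For completeness, a rough sketch of the GetAll route with the C++ API, based on the usage already shown in the question. Value, GetField() and the cache name "Table" are placeholders for your own binary type, accessor and configuration:
#include <ignite/ignition.h>

#include <cstdint>
#include <map>
#include <set>
#include <vector>

// Sketch only: Value stands for your own cache value type (already
// registered as an ignite BinaryType), GetField() for the accessor of
// the column you need, and "Table" for the name of your cache.
std::vector<int64_t> FetchFields(ignite::Ignite& node, const std::set<int64_t>& keys)
{
    ignite::cache::Cache<int64_t, Value> cache =
        node.GetCache<int64_t, Value>("Table");

    // One bulk lookup for the whole key set instead of a SQL IN list.
    std::map<int64_t, Value> entries = cache.GetAll(keys);

    std::vector<int64_t> fields;
    for (const auto& entry : entries)
        fields.push_back(entry.second.GetField());

    return fields;
}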
A query with the "IN" operator will not always use indexes. As a workaround, you can rewrite the query in the following way:
select field from Table t join table(id bigint = ?) i on t.id = i.id
and then invoke it like:
new SqlFieldsQuery(
"select field from Table t join table(id bigint = ?) i on t.id = i.id")
.setArgs(new Object[]{ new Integer[] {2, 3, 4} });
I am using this link.
I have connected my cpp file in Eclipse to my database, which has 3 tables: two simple tables, Person and Item, and a third one, PersonsItems, that connects them. In the third table I use one simple primary key and then two foreign keys, like this:
CREATE TABLE PersonsItems(PersonsItemsId int not null auto_increment primary key,
Person_Id int not null,
Item_id int not null,
constraint fk_Person_id foreign key (Person_Id) references Person(PersonId),
constraint fk_Item_id foreign key (Item_id) references Items(ItemId));
So, then with embedded SQL in C, I want a Person to have multiple Items.
My code:
mysql_query(connection, \
"INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8);");
printf("%ld PersonsItems Row(s) Updated!\n", (long) mysql_affected_rows(connection));
//SELECT newly inserted record.
mysql_query(connection, \
"SELECT Order_id FROM PersonsItems");
//Resource struct with rows of returned data.
resource = mysql_use_result(connection);
// Fetch multiple results
while((result = mysql_fetch_row(resource))) {
printf("%s %s\n",result[0], result[1]);
}
My result is
-1 PersonsItems Row(s) Updated!
5
but with VALUES (1,1,5), (1,1,8);
I would like that to be
-1 PersonsItems Row(s) Updated!
5 8
Can someone tell me why this is not happening?
Kind regards.
I suspect this is because your first insert is failing with the following error:
Duplicate entry '1' for key 'PRIMARY'
This happens because you are trying to insert 1 twice into PersonsItemsId, which is the primary key and so has to be unique (it is also auto_increment, so there is no need to specify a value at all).
This is why rows affected is -1, and why in this line:
printf("%s %s\n",result[0], result[1]);
you are only seeing 5 because the first statement failed after the values (1,1,5) had already been inserted, so there is still one row of data in the table.
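Incidentally, your program never reports this error because the return value of mysql_query is ignored. A minimal sketch of checking it (connection setup assumed; the auto_increment column is left out, as discussed below):
// Sketch: check mysql_query's return value so a failing INSERT
// (e.g. "Duplicate entry '1' for key 'PRIMARY'") is reported instead
// of being silently ignored.
#include <mysql/mysql.h>   // or <mysql.h>, depending on your setup
#include <cstdio>

void insert_person_items(MYSQL* connection)
{
    const char* sql =
        "INSERT INTO PersonsItems(Person_Id, Item_id) VALUES (1,5), (1,8)";

    if (mysql_query(connection, sql) != 0)
    {
        // mysql_error() returns the message of the last failed call.
        std::fprintf(stderr, "INSERT failed: %s\n", mysql_error(connection));
        return;
    }

    std::printf("%ld PersonsItems Row(s) Updated!\n",
                (long) mysql_affected_rows(connection));
}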
I think to get the behaviour you are expecting you need to use the ON DUPLICATE KEY UPDATE syntax:
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, order_id)
VALUES (1,1,5), (1,1,8)
ON DUPLICATE KEY UPDATE Person_id = VALUES(person_Id), Order_ID = VALUES(Order_ID);
Example on SQL Fiddle
Or do not specify the value for personsItemsID and let auto_increment do its thing:
INSERT INTO PersonsItems( Person_Id, order_id)
VALUES (1,5), (1,8);
Example on SQL Fiddle
I think you have a typo or mistake in your two queries.
You are inserting "PersonsItemsId, Person_Id, Item_id"
INSERT INTO PersonsItems(PersonsItemsId, Person_Id, Item_id) VALUES (1,1,5), (1,1,8)
and then your select statement selects "Order_id".
SELECT Order_id FROM PersonsItems
In order to achieve 5, 8 as you request, your second query needs to be:
SELECT Item_id FROM PersonsItems
Edit to add:
Your primary key is autoincrement so you don't need to pass it to your insert statement (in fact it will error as you pass 1 twice).
You only need to insert your other columns:
INSERT INTO PersonsItems(Person_Id, Item_id) VALUES (1,5), (1,8)
I'm using soci with C++ to access my database. Is it possible to modify the following expression in a way that gets the new primary key that was given to the row added by that expression?
*dbSession << "insert into myTable(myRow) values (:myVal)", soci::use(myVal);
e.g.
long newID = *dbSession << "insert into myTable(myRow) values (:myVal)", soci::use(myVal);
So that I can continue my work by using newID? In this case, id is the primary key (bigserial).
In SQL you can use RETURNING to get the generated ID.
Like: INSERT INTO tbloCustomer (Name) VALUES ('Goofy') RETURNING ID;
(If your Primary Key is called ID ;)
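With soci specifically, I believe you can combine RETURNING with soci::into to read the generated key in the same statement. A minimal sketch, assuming a PostgreSQL backend (bigserial) and that the primary key column is called id; the connection string and table/column names are placeholders:
#include <soci/soci.h>
#include <soci/postgresql/soci-postgresql.h>

#include <iostream>

int main()
{
    // Sketch only: adjust the connection string and names to your schema.
    soci::session dbSession(soci::postgresql, "dbname=mydb user=me");

    int myVal = 42;
    long long newID = 0;

    // RETURNING hands the generated key back, and soci::into binds it
    // to newID in the same statement as the soci::use parameter.
    dbSession << "insert into myTable(myRow) values (:myVal) returning id",
        soci::use(myVal), soci::into(newID);

    std::cout << "new primary key: " << newID << std::endl;
}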