I have an H2 database with two tables linked by a foreign key:
CREATE TABLE master (masterId INT PRIMARY KEY, slaveId INT);
CREATE TABLE slave (slaveId INT PRIMARY KEY, something INT,
CONSTRAINT fk FOREIGN KEY (slaveId) REFERENCES master(slaveId));
and need something like
DELETE master, slave FROM master m JOIN slave s ON m.slaveId = s.slaveId
WHERE something = 43;
i.e., delete from both tables (which AFAIK works in MySQL). Because of the FOREIGN KEY, I can't delete from the master first. When I start by deleting from the slave, I lose the information about which rows to delete from the master. Using ON DELETE CASCADE would help, but I don't want it to happen automatically every time. Should I allow it temporarily? Should I use a temporary table, or what is the best solution?
Nobody has answered, but it's actually trivial:
SET REFERENTIAL_INTEGRITY FALSE;
BEGIN TRANSACTION;
DELETE FROM master WHERE slaveId IN (SELECT slaveId FROM slave WHERE something = 43);
DELETE FROM slave WHERE something = 43;
COMMIT;
SET REFERENTIAL_INTEGRITY TRUE;
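If you'd rather not touch referential integrity at all, the temporary-table idea from the question also works. A minimal sketch in H2 syntax (assuming CREATE LOCAL TEMPORARY TABLE ... AS SELECT is available in your version), deleting in an FK-safe order:
BEGIN TRANSACTION;
CREATE LOCAL TEMPORARY TABLE tmp_ids AS
    SELECT slaveId FROM slave WHERE something = 43;
DELETE FROM slave WHERE something = 43;  -- children first, so the FK is never violated
DELETE FROM master WHERE slaveId IN (SELECT slaveId FROM tmp_ids);
DROP TABLE tmp_ids;
COMMIT;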
Please help me with the logic.
I have two tables, customers and transactions, and there is an action column with values I, U, D. If the action is I or U, upsert the data; if it is D, delete the data in the transactions table. If all records with the same transaction id are deleted, then delete the customers record; otherwise delete only the transactions record.
We can do the insert/upsert/delete using an Update Strategy on the transaction table, but how can we delete the customer record when all of its transaction IDs have been deleted?
You need to create logic (like you said) to delete from the customer table. And it's safer to do that either in a new pipeline in the same mapping or in a brand-new mapping.
So, you will read customer_key from customer, do a lookup into the transaction table (condition on customer_key), and if no row is found, delete that customer.
1. Read all customer_key values from the customer table.
2. Look up the transaction table on customer_key; return customer_key (as lkp_customer_key).
3. Use an Update Strategy; link customer_key from SQ #1 and customer_key from the lookup, and create a condition like this:
IIF(ISNULL(lkp_customer_key), DD_DELETE, DD_REJECT)
4. Link customer_key from SQ #1 to the customer target.
You can do this with a left join in the source qualifier as well.
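For example, a sketch of that left-join variant as a SQL override in the source qualifier (table and column names are assumptions based on the question):
SELECT c.customer_key
FROM customer c
LEFT JOIN transactions t ON t.customer_key = c.customer_key
WHERE t.customer_key IS NULL;  -- customers with no remaining transactions, i.e. candidates for DD_DELETE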
Most database servers can also cascade such deletes to the referencing tables via ON DELETE CASCADE on the foreign key.
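For reference, a generic SQL sketch of that cascading idea (table and column names are assumptions based on the question). Note that the cascade runs in the opposite direction of what was asked, removing transactions when their customer is deleted, so it does not replace the lookup logic above:
ALTER TABLE transactions
    ADD CONSTRAINT fk_txn_customer
    FOREIGN KEY (customer_key) REFERENCES customer (customer_key)
    ON DELETE CASCADE;  -- deleting a customer row automatically deletes its transactions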
I am inheriting a DynamoDB database from someone.
There is a table called Item. It uses an item id as the partition key, which is also the primary key (there is no sort key in this table).
Each item has a Tags attribute, which is a list like tag1, tag2, etc. Now I have a new use case where I want to query items by tag efficiently. What is the best solution to this?
I am thinking of creating another table for tags, where the tag is the partition key and the item id becomes the sort key. Is that the best solution, short of redesigning the Item table?
id (partition key / primary key) | tags           | name      | other attributes ...
id1                              | t1, t2         | Item1Name | ...
id2                              | t1, t3, t4, t5 | Item2Name | ...
...
My idea is to create another table; is that the best solution? Any idea is appreciated.
tag (partition key) | itemId (sort key)
t1                  | id1
t1                  | id2
t2                  | id1
t3                  | id2
t4                  | id2
t5                  | id2
...
I think the best solution would require you to recreate the table and take advantage of a GSI (Global Secondary Index).
If you create the DynamoDB table with the id as the partition key and the tag as the sort key, you can query as normal to retrieve a row's data based on your id.
You would then create a GSI with the tag as the partition key (and perhaps the id as the sort key, assuming you need it), projecting any attributes you want available in the GSI.
This approach is better than attempting to keep the data consistent between two separate DynamoDB tables, as you only have to make each change once but can retrieve the data easily for both scenarios.
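For illustration, once such a GSI exists you can address it directly in a query. A sketch using DynamoDB's PartiQL support, where the index name tag-index and the attribute name tag are assumptions:
SELECT * FROM "Item"."tag-index" WHERE tag = 't1'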
I have a database in Qt.
It has four tables: maingroup, subgroup, parts, and position. This is my schema:
CREATE TABLE `maingroup` (
`groupName`TEXT NOT NULL UNIQUE,
PRIMARY KEY(`groupName`)
);
CREATE TABLE `subgroup` (
`sub` TEXT NOT NULL UNIQUE,
`main` TEXT NOT NULL,
PRIMARY KEY(`sub`),
FOREIGN KEY(`main`) REFERENCES `maingroup`(`groupName`) ON DELETE CASCADE
);
CREATE TABLE `parts` (
`ID` INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT UNIQUE,
`Part_Number` TEXT,
`Type` TEXT NOT NULL,
`Value` TEXT,
`Voltage` TEXT,
`Quantity` TEXT,
`Position` TEXT,
`Picture` TEXT,
FOREIGN KEY(`Position`) REFERENCES `Position`(`Position`) ON DELETE CASCADE,
FOREIGN KEY(`Type`) REFERENCES `subgroup`(`sub`) ON DELETE CASCADE
);
Type in table parts is a foreign key that refers to column sub in table subgroup.
main in table subgroup is a foreign key that refers to column groupName in table maingroup.
My problem is that when I run (DELETE FROM maingroup WHERE groupName = 'dd';) in DB Browser, it deletes both the parent and the children.
But in Qt this command (myQuery.exec("delete from maingroup WHERE groupName= 'dd'");) deletes only the parent row in the maingroup table and not the children in the subgroup and parts tables, so the main column in subgroup now refers to a maingroup row that no longer exists.
What is wrong here? What should I do?
You need to turn on the foreign-key pragma by executing another statement before your DELETE statement.
QSqlQuery q;
q.exec("PRAGMA foreign_keys = ON");
q.exec("DELETE FROM ...");
This enables the cascading deletes, and should also be sufficient to solve other foreign-key related issues.
Credits to this forum.qt.io post.
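As a quick sanity check (an aside, not from the original post): the pragma can be read back on the same connection, and SQLite reports 1 when enforcement is on:
PRAGMA foreign_keys;  -- returns 1 if foreign-key enforcement is enabled on this connection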
In addition to @TrebledJ's correct and very helpful answer, it's worth mentioning two additional characteristics of the foreign-key pragma (in connection with Qt):
1. The pragma can be set via QSqlDatabase, too.
So the following code has the same effect as @TrebledJ's example:
auto database = QSqlDatabase::database();
database.exec("PRAGMA foreign_keys = ON");
QSqlQuery query(database); // query "inherits" the pragma from database
query.exec("DELETE FROM ...");
2. This behavior even applies if opening and using the database happen at different places in the program.
Still the same effect:
Somewhere in initialization code:
// this implicitly calls database.open() because the database was not open before.
auto database = QSqlDatabase::database();
database.exec("PRAGMA foreign_keys = ON");
// make sure database.close() is NOT called!
Somewhere else in the code:
// you'll get the instance from the initialization code because the database is already open
// (QSqlDatabase::database() effectively implements a singleton pattern),
// so the foreign_keys pragma is still set to ON
auto database = QSqlDatabase::database();
QSqlQuery query(database);
query.exec("DELETE FROM ...");
This might be important to know if you want to understand why the foreign_keys pragma sometimes seems to apply in your project and sometimes not.
What I did
So I came to the conclusion that there should be ONE distinct place in my code where the database is explicitly opened (and the connection is configured):
QString dbConnectionName = "My project database";
auto database = QSqlDatabase::database(dbConnectionName, true);
// configure the pragmas here
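// e.g., assuming an SQLite connection, enable foreign-key enforcement here,
// once, so every later QSqlQuery on this connection runs with it in effect:
database.exec("PRAGMA foreign_keys = ON");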
At all other places I avoid (accidentally) opening the database:
auto database = QSqlDatabase::database(dbConnectionName, false);
// e.g. use the database via queries ...
I'm looking to ensure isolation when multiple transactions may execute a database insert or update, where the old value is required for the process.
Here is an MVP in Python-like pseudocode; the default isolation level is assumed:
sql('BEGIN')
rows = sql('SELECT `value` FROM table WHERE `id`=<id> FOR UPDATE')
if rows:
    old_value, = rows[0]
    process(old_value, new_value)
    sql('UPDATE table SET `value`=<new_value> WHERE `id`=<id>')
else:
    sql('INSERT INTO table (`id`, `value`) VALUES (<id>, <new_value>)')
sql('COMMIT')
The issue with this is that when the row does not exist yet, FOR UPDATE only acquires a gap lock, and gap locks taken by different transactions are compatible with each other, so both transactions can proceed. This results in a deadlock when both then attempt the INSERT.
Another way is to first try the insert, and update if there is a duplicate key:
sql('BEGIN')
rows_changed = sql('INSERT IGNORE INTO table (`id`, `value`) VALUES (<id>, <new_value>)')
if rows_changed == 0:
    rows = sql('SELECT `value` FROM table WHERE `id`=<id> FOR UPDATE')
    old_value, = rows[0]
    process(old_value, new_value)
    sql('UPDATE table SET `value`=<new_value> WHERE `id`=<id>')
sql('COMMIT')
The issue with this solution is that a failed INSERT takes an S lock on the duplicate key, which again does not prevent two transactions from proceeding, as described here: https://stackoverflow.com/a/31184293/710358.
Of course, any solution requiring a hardcoded wait or locking the entire table is not satisfactory for production environments.
A hack to solve this issue is to use INSERT ... ON DUPLICATE KEY UPDATE ..., which always takes an X lock. Since you need the old value, you can perform a blank update and then proceed as in your second solution:
sql('BEGIN')
rows_changed = sql('INSERT INTO table (`id`, `value`) VALUES (<id>, <new_value>) ON DUPLICATE KEY UPDATE `value`=`value`')
if rows_changed == 0:
    rows = sql('SELECT `value` FROM table WHERE `id`=<id> FOR UPDATE')
    old_value, = rows[0]
    process(old_value, new_value)
    sql('UPDATE table SET `value`=<new_value> WHERE `id`=<id>')
sql('COMMIT')
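For clarity, a small demonstration of the MySQL affected-rows convention that the rows_changed check relies on (values are illustrative; this assumes the default client flags, i.e. CLIENT_FOUND_ROWS is not set):
-- first execution with a fresh id: 1 row affected (the row is inserted)
-- second execution with the same id: 0 rows affected, because the row already
-- exists and `value`=`value` changes nothing; that 0 is exactly what the
-- rows_changed == 0 branch above detects
INSERT INTO table (`id`, `value`) VALUES (1, 'a')
    ON DUPLICATE KEY UPDATE `value`=`value`;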
I have a table that contains about half a billion records. I want to change the key of these records, i.e. fetch a record, change its key somehow, delete what was fetched, and save the new record. Let us say, for example, my key is [time-accountId] and I want to change it to [accountId-time].
I want to fetch each entity, create a new one with the different key, delete the entity with the [time-accountId] key, and save the new entity with the [accountId-time] key.
What is the best way to accomplish this task ?
I am thinking of M/R, but how can I delete entities with M/R?
You need a MapReduce job that produces a Put and a Delete for each row of your table. Only a mapper is needed here, since you don't need any aggregation of your data, so skip the reducer:
TableMapReduceUtil.initTableReducerJob(
table, // output table
null, // reducer class
job);
Your mapper has to generate both Puts and Deletes, so the output value class to use is Mutation (https://hbase.apache.org/0.94/apidocs/org/apache/hadoop/hbase/client/Mutation.html):
TableMapReduceUtil.initTableMapperJob(
table, // input table
scan, // Scan instance to control CF and attribute selection
MyMapper.class, // mapper class
ImmutableBytesWritable.class, // mapper output key
Mutation.class, // mapper output value
job);
Then the body of your mapper's map(oldKey, row, context) method will look like this:
Delete delete = new Delete(oldKey.get());  // remove the row stored under the old [time-accountId] key
context.write(oldKey, delete);
byte[] newRow = ...                        // build the new [accountId-time] key from the old one
Put put = new Put(newRow);
for (KeyValue kv : row.raw())              // copy every cell of the old row under the new key
    put.add(kv.getFamily(), kv.getQualifier(), kv.getTimestamp(), kv.getValue());
context.write(new ImmutableBytesWritable(newRow), put);