I want to delete several rows from a table using a DELETE ... LIKE query in C++. I know how to do that and it works. But I also want to know which rows were actually deleted. Is there a way to do this in one query, or is that impossible, so that the only way is:
1. SELECT LIKE
2. DELETE LIKE
As an alternative to SELECT -> DELETE, you could also use a TRIGGER.
The following example demonstrates a relatively generic (and as such a little inefficient) way of doing both :-
DROP TABLE IF EXISTS mytable;
DROP TABLE IF EXISTS mytable_deletions;
DROP TABLE IF EXISTS mytable_deletions2;
DROP TRIGGER IF EXISTS mytable_deletions_trigger;
/* MAIN TABLE */
CREATE TABLE IF NOT EXISTS mytable(id INTEGER PRIMARY KEY, mydata TEXT);
/* CREATE TABLES TO RECORD DELETIONS (first for 1. select->delete the other for 2. trigger)*/
/* Note that these are created based upon the source table, with an additional column srowid so as to be able to definitively delete rows (extra column may not be needed) */
CREATE /*TEMP option? */ TABLE IF NOT EXISTS mytable_deletions AS SELECT 0 AS srowid,* FROM mytable WHERE mydata <> mydata;
CREATE /*TEMP option? */ TABLE IF NOT EXISTS mytable_deletions2 AS SELECT 0 AS srowid,* FROM mytable WHERE mydata <> mydata;
/* Create the BEFORE DELETE Trigger */
CREATE TRIGGER IF NOT EXISTS mytable_deletions_trigger BEFORE DELETE ON mytable
BEGIN
INSERT INTO mytable_deletions2 SELECT rowid AS srowid,* FROM mytable WHERE rowid = old.rowid;
END
;
/* LOAD TESTING DATA */
INSERT INTO mytable (mydata) VALUES('A'),('B'),('C'),('D'),('E');
/* CLEAR PREVIOUS DELETIONS */
DELETE FROM mytable_deletions;
DELETE FROM mytable_deletions2;
/* 1. RECORD DELETIONS PRIOR AND DELETE ACCORDING TO RECORDED DELETIONS */
INSERT INTO mytable_deletions SELECT rowid AS srowid,* FROM mytable WHERE mydata >= 'C';
/* DO THE ACTUAL DELETIONS (will fire trigger 2.) */
DELETE FROM mytable WHERE rowid IN (SELECT srowid FROM mytable_deletions);
/* Display results from TRIGGER */
SELECT * FROM mytable;
SELECT * FROM mytable_deletions;
SELECT * FROM mytable_deletions2;
/* CLEANUP */
DROP TABLE IF EXISTS mytable;
DROP TABLE IF EXISTS mytable_deletions;
DROP TABLE IF EXISTS mytable_deletions2;
DROP TRIGGER IF EXISTS mytable_deletions_trigger;
Running the above leaves mytable with rows A and B, while mytable_deletions (from the select->delete approach) and mytable_deletions2 (from the trigger) each contain the deleted rows C, D and E.
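Note that if you can rely on SQLite 3.35.0 (2021-03-12) or later, the RETURNING clause reports the deleted rows directly from a single DELETE, avoiding both the pre-SELECT and the trigger. A minimal sketch using the same test data as above:
/* Requires SQLite 3.35.0+ : each deleted row is handed back to the caller */
DELETE FROM mytable WHERE mydata >= 'C' RETURNING *;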
I'm using MySQL from C++, and I want to create a copy of every table in my second database. The code I have now is:
CREATE TABLE new_table LIKE original_table;
INSERT INTO new_table SELECT * FROM original_table;
I want this to work like a loop, so that every table, and all the data in it, is copied for every table in my second database. Can someone help me?
We can use a stored procedure to do the job in a loop. I wrote the code below and tested it in Workbench; it copied all the tables (excluding views) from the sakila database to my sakila_copy database:
use testdb;
delimiter //
drop procedure if exists copy_tables //
create procedure copy_tables(old_db varchar(20),new_db varchar(20))
begin
declare tb_name varchar(30);
declare fin bool default false;
declare c cursor for select table_name from information_schema.tables where table_schema=old_db and table_type='BASE TABLE';
declare continue handler for not found set fin=true;
open c;
lp:loop
fetch c into tb_name;
if fin=true then
leave lp;
end if;
set @create_stmt=concat('create table ',new_db,'.',tb_name,' like ',old_db,'.',tb_name,';');
prepare ddl from @create_stmt;
execute ddl;
deallocate prepare ddl;
set @insert_stmt=concat('insert into ',new_db,'.',tb_name,' select * from ',old_db,'.',tb_name,';');
prepare dml from @insert_stmt;
execute dml;
deallocate prepare dml;
end loop lp;
close c;
end//
delimiter ;
create database sakila_copy;
call testdb.copy_tables('sakila','sakila_copy');
-- after the call, check the tables in sakila_copy to find the new tables
show tables in sakila_copy;
Note: As I stated before, only base tables are copied. I deliberately skipped views, as they provide logical access to tables and hold no data themselves.
I am looking at some SAS/Teradata code and am confused about the code below. It has a volatile table and a multiset volatile table. What is the difference between the two? Also, why does it specify WITH DATA PRIMARY INDEX? And for the second one, why does it collect statistics?
PROC SQL ;
CONNECT TO TERADATA (AUTHDOMAIN=IDWPRD SERVER= IDWPRD MODE=TERADATA);
EXECUTE(
CREATE VOLATILE TABLE REQ1_1_CODE_INS AS (
SELECT
ACCT_REF_NB,
CAST(NON_MNTR_TXN_PST_TS AS DATE) AS ADJ_DT,
SRC_DATA_DT,
NON_MNTR_TXN_SEQ_NB,
SRC_CRE_USER_ID,
PROC_TRAN_CD,
PROC_TRCK_ID,
MAX(CASE WHEN NON_MNTR_TXN_SBTP_CD = '0009' THEN TRIM(NEW_NON_MNTR_TXN_DTL_TX) ELSE NULL END) AS CARD_NB
FROM DWHMGR.PST_NON_MNTR_TXN
WHERE NON_MNTR_TXN_TP_CD ='255'
AND CAST(NON_MNTR_TXN_PST_TS AS DATE) >= '2016-03-13'
AND CAST(NON_MNTR_TXN_PST_TS AS DATE) <= '2017-11-09'
GROUP BY 1,2,3,4,5,6,7
HAVING TXN_DT <= ADD_MONTHS(ADJ_DT, -24)
OR UPPER(MRCH_NM) LIKE '%CHECK TO%'
OR UPPER(MRCH_NM) LIKE '%BALANCE TRANSFER%'
)WITH DATA PRIMARY INDEX(ACCT_REF_NB) ON COMMIT PRESERVE ROWS;
) BY TERADATA;
CREATE TABLE UNIX.REQ1_1_CODE_INS AS SELECT * FROM CONNECTION TO TERADATA(SELECT * FROM REQ1_1_CODE_INS);
/* REFERENCE TABLE */
EXECUTE(
CREATE MULTISET VOLATILE TABLE _ACCTS_00 AS (
SELECT DISTINCT ACCT_REF_NB FROM REQ1_1_CODE_INS
) WITH DATA PRIMARY INDEX(ACCT_REF_NB) ON COMMIT PRESERVE ROWS;
) BY TERADATA;
EXECUTE( COLLECT STATISTICS ON _ACCTS_00 PRIMARY INDEX(ACCT_REF_NB); ) BY TERADATA;
A volatile table is like a work table in SAS: it exists only for the duration of the session.
Teradata has two kinds of tables: SET tables and MULTISET tables. A SET table does not allow row-level duplicates, whereas a MULTISET table does. SET is the default if nothing is specified in the CREATE TABLE statement.
Teradata also wants a primary index, declared as WITH DATA PRIMARY INDEX(column_name). WITH DATA populates the new table from the SELECT; the other option, WITH NO DATA, creates it empty.
COLLECT STATISTICS is a big topic in itself; in short, it gathers demographic data about the primary index, which in turn helps future queries that depend on that index.
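A minimal illustration of the SET vs MULTISET difference, with hypothetical table names (behaviour as I understand Teradata's defaults in Teradata session mode):
CREATE VOLATILE TABLE set_demo (n INTEGER) ON COMMIT PRESERVE ROWS;          /* SET by default */
CREATE MULTISET VOLATILE TABLE ms_demo (n INTEGER) ON COMMIT PRESERVE ROWS;
INSERT INTO set_demo VALUES (1);
INSERT INTO set_demo VALUES (1);  /* rejected with a duplicate-row error */
INSERT INTO ms_demo VALUES (1);
INSERT INTO ms_demo VALUES (1);   /* allowed: MULTISET keeps exact duplicates */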
I have a large target table with columns (id, value). I want to update value='old' to value='new'.
The simplest way would be to UPDATE target SET value='new' WHERE value='old';
However, this deletes the old rows and creates new ones, so it is possibly not recommended. So I tried to do a merge column update:
-- staging
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage (SELECT id, value FROM target WHERE value='old');
UPDATE stage SET value='new' WHERE value='old'; -- ??? how do you update value?
-- merge
begin transaction;
UPDATE target
SET value = stage.value FROM stage
WHERE target.id = stage.id and target.distkey = stage.distkey; -- collocated join?
end transaction;
DROP TABLE stage;
This can't be the best way of creating the stage table: I still incur all the UPDATE delete/writes when I update this way. Is there a way to do it in the INSERT instead?
Is it necessary to force the collocated join when I use CREATE TABLE LIKE?
Are you updating all the rows in the table?
If yes, you can use CTAS (CREATE TABLE AS), which is the recommended method.
Assuming your table looks like this:
table1
id, col1,col2, value
You can use the following SQL to create a new table
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1;
After you verify the data in tmp_table:
DROP TABLE table1;
ALTER TABLE tmp_table RENAME TO table1;
If you are not updating all the rows, you can use a filter to do a CTAS and then insert the rest of the rows into the new table; let me know if you need more info if this is the case:
CREATE TABLE tmp_table AS
SELECT id, col1, col2, 'new_value' AS value
FROM table1
WHERE value = 'old';
INSERT INTO tmp_table SELECT * FROM table1 WHERE value != 'old';
The next step would be to DROP table1 and rename tmp_table to table1.
Update: Based on your comment, you can do the following; let me know if this solves your case.
This method basically creates a new table to replace your existing table.
I have used some of your code:
CREATE TABLE stage (LIKE target INCLUDING DEFAULTS);
INSERT INTO stage SELECT id, 'new' FROM target WHERE value='old';
The INSERT above writes the rows to be updated with the value 'new'; there is no need to run an UPDATE after this.
Bring over the unchanged rows:
INSERT INTO stage SELECT id, value FROM target WHERE value!='old';
After this point you still have target, your original table, intact.
The stage table now holds both sets of rows: the updated rows with the 'new' value and the rows you did not want to change.
To replace target with stage:
DROP TABLE target;
or, to keep it for further verification:
ALTER TABLE target RENAME TO target_old;
ALTER TABLE stage RENAME TO target;
From a redshift developer:
This case doesn't require an upsert, or update+insert, and it is fine to just run the update:
UPDATE target SET value='new' WHERE value='old';
Another way would be to INSERT the rows you need and DELETE the other rows, but that's unnecessarily complicated.
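For completeness, a minimal sketch of that insert-then-delete alternative, using the same target/value names as the question and wrapped in a transaction so readers never see a half-applied state:
begin transaction;
-- duplicate each 'old' row with the new value...
INSERT INTO target SELECT id, 'new' FROM target WHERE value = 'old';
-- ...then remove the originals
DELETE FROM target WHERE value = 'old';
end transaction;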
I have an SQLite table of ~1M rows. Each row has a structure of (docId, docBLOB). Each docBlob is nearly 20Kb.
I have to perform SELECT by an externally provided list of docIDs. Each list may be nearly 100K elements long. How can I do it more efficiently?
Maybe there is a way to write a SELECT * FROM docBlobTable WHERE docId IN ( [MEGALIST] ) statement?
Put all the IDs into a temporary table, then use:
SELECT * FROM docBlobTable WHERE docId IN (SELECT ID FROM TempTable)
or:
SELECT docBlobTable.*
FROM docBlobTable
JOIN TempTable ON docBlobTable.docId = TempTable.ID
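A minimal sketch of the temporary-table setup (hypothetical table/column names; the IDs are assumed to arrive through a prepared, parameterised INSERT executed once per docId inside a single transaction):
CREATE TEMP TABLE TempTable(ID INTEGER PRIMARY KEY);
BEGIN;
INSERT INTO TempTable(ID) VALUES (?);  -- bind and step once per incoming ID
COMMIT;
SELECT docBlobTable.*
FROM docBlobTable
JOIN TempTable ON docBlobTable.docId = TempTable.ID;
The INTEGER PRIMARY KEY gives the lookup an index for free, and batching all ~100K inserts into one transaction keeps the load step fast.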
I develop with MFC Visual C++ and Oracle SQL Server.
I have an SQL table with IDs, a value and a time; when the application inserts a new row, some ID, some value and the time are inserted.
My goal is to delete rows whose values were changed within a certain time range, since the data that was inserted during that time has incorrect values.
Where is the catch? I don't need to delete all the rows that were updated in that time period, only the rows with IDs that appear in a certain CArray.
I could go through each ID from the CArray and execute a delete query for that ID in that time period (whether there is an entry or not) - a problem, since I can have 150K IDs to iterate over.
Thanks
DELETE FROM table-name WHERE id in (...)
Transform your array into a temp table with one column, and then delete from your destination table where ID IN (SELECT ID FROM tempTable).
Here is an example:
declare @RegionID varchar(50)
SET @RegionID = '853,834,16,467,841'
declare @S varchar(20)
if LEN(@RegionID) > 0 SET @RegionID = @RegionID + ','
CREATE TABLE #ARRAY(region_ID VARCHAR(20))
WHILE LEN(@RegionID) > 0 BEGIN
SELECT @S = LTRIM(SUBSTRING(@RegionID, 1, CHARINDEX(',', @RegionID) - 1))
INSERT INTO #ARRAY (region_ID) VALUES (@S)
SELECT @RegionID = SUBSTRING(@RegionID, CHARINDEX(',', @RegionID) + 1, LEN(@RegionID))
END
delete from your_table
where regionID IN (select region_ID from #ARRAY)
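Since the question also restricts the delete to a time window, the final DELETE would presumably combine both filters. A sketch with hypothetical names (value_time, @start_time and @end_time are assumptions, not from the original schema):
delete from your_table
where regionID IN (select region_ID from #ARRAY)
and value_time BETWEEN @start_time AND @end_time  -- hypothetical time column and range variables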