SQL Server CTE with multiple delete statements

I have a CTE:
;WITH DeleteTarget AS
(
....
)
How do I use this CTE for two delete statements - maybe like:
DELETE FROM [TableA]
WHERE ColumnA IN (SELECT Id FROM DeleteTarget)
DELETE FROM [TableB]
WHERE ColumnB IN (SELECT Name FROM DeleteTarget)

You cannot - a CTE exists only for the single statement that immediately follows it.
If you need the result set that the CTE provides more than once, you need to:
store the result set in a table variable or temp table
then execute your multiple statements against that table variable / temp table
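A minimal sketch of that pattern, assuming the CTE exposes Id and Name columns as in the question (the temp table name is illustrative):
;WITH DeleteTarget AS
(
    ....  -- same definition as above
)
SELECT Id, Name
INTO #DeleteTarget      -- materialize the result set once
FROM DeleteTarget;

DELETE FROM [TableA]
WHERE ColumnA IN (SELECT Id FROM #DeleteTarget);

DELETE FROM [TableB]
WHERE ColumnB IN (SELECT Name FROM #DeleteTarget);

DROP TABLE #DeleteTarget;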

Related

Getting table names and row counts for all tables in an Athena database

I have an AWS database with multiple tables that I am trying to get the row counts for in a single query.
The ideal query output would be:
table_name row_count
table2_name row_count
etc...
So far I've been able to either get all the table names from the database or all the row counts of the tables (in random order), but not both in the same query.
This query returns a column of all the table names that exist in the database:
SELECT table_name FROM information_schema.tables WHERE table_schema = '<database_name>';
This query returns all the row counts for the tables:
SELECT COUNT(*) FROM table_name
UNION ALL
SELECT COUNT(*) FROM table2_name
UNION ALL
etc..for the rest of the tables
The issue with this query is that it displays the row counts in a random order that doesn't correspond with the order of the tables in the query, so I don't know which row count goes with which table - hence why I need both the table names and row counts.
Simply add the names of the tables as literals in your queries:
SELECT 'table_name' AS table_name, COUNT(*) AS row_count FROM table_name
UNION ALL
SELECT 'table_name2' AS table_name, COUNT(*) AS row_count FROM table_name2
UNION ALL
…
The following query generates the UNION ALL query that produces counts for all tables.
The problem to solve is that (as of December 2022) INFORMATION_SCHEMA.TABLES incorrectly defines every table and view as a BASE TABLE, so you will need some logic to eliminate the views.
In data warehousing it is common practice to record snapshots of the record counts of landing tables at frequent intervals. Any unexpected deviations from the expected counts can be used for reporting/alerting.
WITH Table_List AS (
SELECT table_schema,table_name, CONCAT('SELECT CURRENT_DATE AS run_date, ''',table_name, ''' AS table_name, COUNT(*) AS Records FROM "',table_schema,'"."', table_name, '"') AS BaseSQL
FROM INFORMATION_SCHEMA.TABLES
WHERE
table_schema = 'YOUR_DB_NAME' -- Change this
AND table_name LIKE 'YOUR TABLE PATTERN%' -- Change or remove this line
)
, Total_Records AS (
SELECT COUNT(*) AS Table_Count
FROM Table_List
)
SELECT
CASE WHEN ROW_NUMBER() OVER (ORDER BY table_name) = Table_Count
THEN BaseSQL
ELSE CONCAT(BaseSql, ' UNION ALL') END AS All_Table_Record_count_SQL
FROM Table_List CROSS JOIN Total_Records
ORDER BY table_name;
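For illustration, if the schema contained just two tables (say customers and orders - names here are purely hypothetical), the generated statement would look roughly like:
SELECT CURRENT_DATE AS run_date, 'customers' AS table_name, COUNT(*) AS Records FROM "YOUR_DB_NAME"."customers" UNION ALL
SELECT CURRENT_DATE AS run_date, 'orders' AS table_name, COUNT(*) AS Records FROM "YOUR_DB_NAME"."orders"
which can then be run as a single query.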

How to assign a Count of dataset to variable and use it in IF ELSE statement

I want to store count of dataset in variable like below
%let Cnt ;
create table work.delaycheck as
select * from connection to oracle
(
SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES
FROM FACT_TABLE
MINUS
SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES
FROM HIST_FCT
);
I want to store count of this table in the variable Cnt like below
%put = (Select count(*) from work.delaycheck )
And Then
If(Cnt=0)
THEN
DO NOTHING
ELSE
execute(
Insert into Oracle_table
select * from work.delaycheck
) by oracle;
disconnect from oracle;
quit;
How can I achieve these steps? Thanks in advance!!
All of the SQL and data shown live on the remote side. You can perform all of the activity there without involving SAS. Oracle will process
PROC SQL;
CONNECT TO ORACLE ...;
EXECUTE (
INSERT INTO <TARGET_TABLE>
SELECT * FROM
( SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES
FROM FACT_TABLE
MINUS
SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES
FROM HIST_FCT
)
) BY ORACLE;
and not insert any records if the fact table contains only historical facts.
EXECUTE can also submit PL/SQL statements, which in turn can reduce the need for extraneous system interplay.
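For example, a sketch of the same load wrapped in an anonymous PL/SQL block (the target table name is a placeholder as above, and the explicit COMMIT is an assumption):
EXECUTE (
  BEGIN
    INSERT INTO <TARGET_TABLE>
      SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES FROM FACT_TABLE
      MINUS
      SELECT PTNR_ID,CLNT_ID,REPORTING_DATE_KEY,NET_SALES FROM HIST_FCT;
    COMMIT;
  END;
) BY ORACLE;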
Delete this line from your code
%let Cnt ;
In order to get the count: Add the code below which will create the macro variable Cnt with the count:
proc sql;
select count(*) into :Cnt from work.delaycheck;
quit;
Update the if statement: the "&" is used to reference macro variables
If &cnt=0
The Code below shows how to use the if/else and the use of Call Execute:
data _null_;
if &cnt=0 then put 'Cnt is 0';/*if true: a note is written to the log*/
else call execute ('proc print data=work.e; run;');
/*else clause: the Proc Print code is executed*/
run;
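Putting the pieces together, one possible end-to-end sketch - assuming a LIBNAME (here called orlib) already points at the target Oracle schema; the library and table names are illustrative, not from the question:
proc sql noprint;
  select count(*) into :Cnt trimmed from work.delaycheck;
quit;

%macro load_delaycheck;
  %if &Cnt = 0 %then %do;
    %put NOTE: work.delaycheck is empty - nothing to load.;
  %end;
  %else %do;
    /* push the rows to Oracle through the libname engine */
    proc append base=orlib.oracle_table data=work.delaycheck force;
    run;
  %end;
%mend load_delaycheck;
%load_delaycheck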

How to insert records from other tables to a table with more columns in sas

I have two SAS tables, A and B. A has two columns (columna and columnb) and table B has four columns (columna, columnb, columnc and columnd). I wish to insert records from table A into table B. I tried the following, but it gives me errors:
PROC SQL;
insert into B
select *, columnc='a', columnd='b' from A;
QUIT;
Assuming you just want to leave the extra columns empty, then don't include them in the insert. It is much easier to just use SAS code instead of SQL code.
proc append base=b data=a force nowarn;
run;
For the SQL Insert statement you need to specify which columns in the target table you are writing into, otherwise it assumes you will specify values for all of them.
insert into B (columna,columnb)
select columna,columnb
from A
;
If instead you want to fill the extra columns with constants then include the constants in the SELECT list.
insert into B (columna,columnb,columnc,columnd)
select columna,columnb,'a','b'
from A
;
If you are positive that you are providing the values in the right order then you can leave the column names off of the target table specification.
insert into B
select *,'a','b'
from A
;
You can't specify the variable name that way; in fact, you can't specify the variable at all using insert into. See this example:
proc sql;
create table class like sashelp.class;
alter table class
add rownum numeric;
alter table class
add othcol numeric;
insert into class
select *, 1 as othcol, monotonic() as rownum from sashelp.class;
quit;
Here I use as to specify the column name, but notice that it doesn't actually work: it puts 1 in the rownum column and the monotonic() value in othcol, since that is the order of the columns in the table.
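One way to make the values land in the intended columns is to list them in the table's physical column order (or name the target columns explicitly, as shown in the previous answer); a sketch, run against a freshly created class table:
proc sql;
  insert into class
  select *, monotonic() as rownum, 1 as othcol
  from sashelp.class;
quit;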

SQL UPDATE with multiple conditions in WHERE clause

UPDATE Table1
SET [Marks] =
(
    SELECT CASE STATEMENTS
    FROM Table2 T2
    WHERE Table1.ID = T2.ID
)
The above UPDATE statement works fine, but if the ID doesn't match then it inserts a NULL value for Marks.
But I want to keep the original value for Marks in Table1 if the Table1 and Table2 IDs don't match.
How do I implement that in my code, please?
I also tried using WHERE EXISTS but still no luck; I wonder what its exact use is.
Any help much appreciated.
UPDATE Table1
SET [Marks] =
(
    SELECT CASE STATEMENTS
    FROM Table2 T2
    WHERE Table1.ID = T2.ID
)
WHERE id IN (SELECT id FROM Table2)
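An alternative sketch (not part of the answer above) is to fall back to the existing value with COALESCE instead of filtering the updated rows:
UPDATE Table1
SET [Marks] = COALESCE(
    (SELECT CASE STATEMENTS FROM Table2 T2 WHERE Table1.ID = T2.ID),
    Table1.[Marks]
)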

Alter column data type in Amazon Redshift

How to alter column data type in Amazon Redshift database?
I am not able to alter the column data type in Redshift; is there any way to modify the data type in Amazon Redshift?
As noted in the ALTER TABLE documentation, you can change the length of VARCHAR columns using
ALTER TABLE table_name
{
ALTER COLUMN column_name TYPE new_data_type
}
For other column types, all I can think of is to add a new column with the correct data type, then copy all data from the old column to the new one, and finally drop the old column.
Use code similar to this:
ALTER TABLE t1 ADD COLUMN new_column ___correct_column_type___;
UPDATE t1 SET new_column = column;
ALTER TABLE t1 DROP COLUMN column;
ALTER TABLE t1 RENAME COLUMN new_column TO column;
There will be a schema change - the newly added column will be last in the table (that may be a problem with the COPY statement; keep in mind that you can define a column order in COPY).
To avoid the schema change mentioned by Tomasz:
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
INSERT INTO <TABLE_NAME> (<NEW_COLUMN_DEFINITION>)
SELECT <COLUMNS>
FROM <TABLE_NAME>_OLD;
DROP TABLE <TABLE_NAME>_OLD;
END TRANSACTION;
(Recent update) It's possible to alter the type for varchar columns in Redshift.
ALTER COLUMN column_name TYPE new_data_type
Example:
CREATE TABLE t1 (c1 varchar(100))
ALTER TABLE t1 ALTER COLUMN c1 TYPE varchar(200)
Here is the documentation link
If you don't want to change the column order, an option is to create a temp table, drop and recreate the original one with the desired column size, and then load the data back:
CREATE TEMP TABLE temp_table AS SELECT * FROM original_table;
DROP TABLE original_table;
CREATE TABLE original_table ...
INSERT INTO original_table SELECT * FROM temp_table;
The only problem with recreating the table is that you will need to grant the permissions again, and if the table is very big it will take a while.
ALTER TABLE publisher_catalogs ADD COLUMN new_version integer;
update publisher_catalogs set new_version = CAST(version AS integer);
ALTER TABLE publisher_catalogs DROP COLUMN version RESTRICT;
ALTER TABLE publisher_catalogs RENAME COLUMN new_version TO version;
Redshift, being a columnar database, doesn't allow you to modify the data type directly;
however, below is one approach. Note that it will change the column order.
Steps -
1. Alter the table to add a new column.
2. Update the new column with the old column's value.
3. Alter the table to drop the old column.
4. Alter the table to rename the new column to the old column's name.
If you don't want to alter the order of the columns, then the solution would be to:
1. Create a temp table with the new column definition.
2. Copy the data from the old table to the new table.
3. Drop the old table.
4. Rename the new table to the old table's name.
One important thing: create the new table using the LIKE command instead of a simple CREATE.
This method works for converting a (big)int column into a varchar:
-- Create a backup of the original table
create table original_table_backup as select * from original_table;
-- Drop the original table, and then recreate with new desired data types
drop table original_table;
create table original_table (
col1 bigint,
col2 varchar(20) -- changed from bigint
);
-- insert original entries back into the new table
insert into original_table select * from original_table_backup;
-- cleanup
drop table original_table_backup;
You can use the statements below:
ALTER TABLE <table name, e.g. etl_proj_atm.dim_card_type>
ALTER COLUMN <col name, e.g. card_type> TYPE varchar(30);
UNLOAD and COPY with a table-rename strategy should be the most efficient way to do this operation if retaining the table structure (row order) is important.
Here is an example adding to this answer.
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
UNLOAD ('select * from <TABLE_NAME>_OLD') TO 's3://bucket/key/unload_' manifest;
COPY <TABLE_NAME> FROM 's3://bucket/key/unload_manifest' manifest;
END TRANSACTION;
For updating values in the same column in Redshift, this works fine:
UPDATE table_name
SET column_name = 'new_value' WHERE column_name = 'old_value'
You can have multiple clauses in the WHERE by combining them with AND, so as to remove any ambiguity for SQL.
Cheers!!