Table locking on update

I have the following SQL to update a table
update table1 t1
inner join tbl2 t2 on t1.ForeignId = t2.id
set t1.Qty = T2.Qty
Please note that only t1 is updated. This SQL is run inside a transaction.
After this SQL runs, I try to drop the table outside the transaction: 'Drop table if exists tbl2'.
This hangs, and the table stays locked.
Is there any way to use this table to update another table within a transaction and then drop it afterwards, before the transaction is committed?

No, obviously not: tbl2 is in active use as long as the transaction is not committed. You could work with a session-scoped temporary table holding a copy of tbl2, or just wait until you're finished before deleting it.
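A minimal sketch of the temp-table route, assuming MySQL (the dialect the UPDATE ... JOIN syntax suggests); tmp_tbl2 is a name introduced here for illustration:
-- Copy tbl2 before the transaction starts, so the transaction never locks tbl2
CREATE TEMPORARY TABLE tmp_tbl2 AS
SELECT id, Qty FROM tbl2;

START TRANSACTION;
UPDATE table1 t1
INNER JOIN tmp_tbl2 t2 ON t1.ForeignId = t2.id
SET t1.Qty = t2.Qty;
COMMIT;

-- tbl2 holds no lock from the committed transaction, so this no longer hangs
DROP TABLE IF EXISTS tbl2;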

How to do an Update from a Select in Azure?

I need to update a second table with the results of this query:
SELECT Tag, battery, Wearlevel, SensorTime
FROM (
SELECT m.* , ROW_NUMBER() OVER (PARTITION BY TAG ORDER BY SensorTime DESC) AS rn
FROM [dbo].[TELE] m
) m2
where m2.rn = 1;
But I had a hard time getting the SET clause right without messing things up. I want to end up with a table that has the data from the last date of each TAG, without duplicates.
The code below may be what you want.
UPDATE
Table_A
SET
Table_A.Primarykey = 'ss'+Table_B.Primarykey,
Table_A.AddTime = 'jason_'+Table_B.AddTime
FROM
Test AS Table_A
INNER JOIN UsersInfo AS Table_B
ON Table_A.id = Table_B.id
WHERE
Table_A.Primarykey = '559713e6-0d85-4fe7-87a4-e9ceb22abdcf'
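Mapped onto the query in the question, a hedged sketch could look like the following; the target table [dbo].[LATEST] and its column list are assumptions standing in for the asker's second table:
UPDATE t
SET t.battery    = m2.battery,
    t.Wearlevel  = m2.Wearlevel,
    t.SensorTime = m2.SensorTime
FROM [dbo].[LATEST] AS t
INNER JOIN (
    SELECT Tag, battery, Wearlevel, SensorTime,
           ROW_NUMBER() OVER (PARTITION BY Tag ORDER BY SensorTime DESC) AS rn
    FROM [dbo].[TELE]
) AS m2 ON t.Tag = m2.Tag
WHERE m2.rn = 1;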
For more details, you can also refer to the posts below.
1. How do I UPDATE from a SELECT in SQL Server?
2. How to UPDATE from SELECT in SQL Server

Exasol Update Table using subselect

I got this statement, which works in Oracle:
update table a set
a.attribute =
(select
round(sum(r.attribute1),4)
from table2 p, table3 r
where 1 = 1
and some joins
)
where 1 = 1
and a.attribute3 > 10
;
Now I would like to run the same statement in Exasol DB, but I get the error [Code: 0, SQL State: 0A000] Feature not supported: this kind of correlated subselect (Session: 1665921074538906818).
After some research, I found out that you need to write the query in the following syntax:
UPDATE table a
set a.attribute = r.attribute2
FROM table a, table2 p, table3 r
where 1 = 1
and some joins
and a.attribute3 > 10;
The problem is that I can't take the sum of r.attribute2 this way, so I get an unstable set of rows. Is there any way to do the first query in Exasol DB?
Thanks for the help, guys!
The following SQL UPDATE statement will work when the JOIN between table1 and table2 is 1-to-1 (or when there is a 1-to-1 relation between the target table and the result set of the JOINs).
In this case the target table's val column is updated; otherwise an error is returned.
UPDATE table1 AS a
SET a.val = table2.val
FROM table1, table2
WHERE table1.id = table2.id;
On the other hand, if the join returns multiple rows for a single table1 row, the 'unstable set of rows' error is raised.
If you want to sum the column values of the multiplying rows, maybe the following approach can help:
first sum the rows of table2 grouped by the join key, then use this subselect as a derived table in the UPDATE ... FROM statement.
UPDATE table1 AS a
SET a.val = table2.val
FROM table1
INNER JOIN (
select id, sum(val) val from table2 group by id
) table2
ON table1.id = table2.id;
I demonstrated the approach with two tables; in your case you will probably use table2 and table3 in the subselect statement, along the lines of the sketch below.
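Mapped back onto the original Oracle statement, a hedged sketch could look like this; "table" is the question's own placeholder name, and key_col stands in for the asker's actual join columns ("some joins"):
UPDATE table AS a
SET a.attribute = s.sum_attr
FROM table
INNER JOIN (
    SELECT r.key_col, ROUND(SUM(r.attribute1), 4) AS sum_attr
    FROM table2 p, table3 r
    WHERE p.key_col = r.key_col   -- placeholder for "some joins"
    GROUP BY r.key_col
) s ON table.key_col = s.key_col
WHERE a.attribute3 > 10;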
I hope this is the answer you were looking for.

SQLAlchemy: update resultset of raw query

I'm new to SQLAlchemy. I have a raw SQL query which I need to execute by passing bind parameters. For the rows resulting from the query, I need to update a particular column value. How do I do this efficiently?
Below are the columns in my table metrics:
id, total, pass, fail, category, ref_id
query = "Select * from table where id in(select max(id) from table ...)"
sql = text(query)
result = db.engine.execute(sql, CATEGORY=category)
for row in result:
//update here
So I have this complex query that I need to execute as an inline query. Let's say I get three rows back and I need to update ref_id for all 3 rows with a value. How can I achieve this, preferably as a bulk update?
I'm using Python 2.7, SQLAlchemy==0.9.9, SQLAlchemy-Utils==0.29.8.
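One possible shape for this, as a minimal sketch: fetch the ids that the complex query selects, then issue a single UPDATE with an executemany-style parameter list. It assumes the Flask-style db handle from the question; the inner SELECT and new_ref_id are placeholders:
from sqlalchemy import text

new_ref_id = 42  # hypothetical replacement value

# Fetch only the ids of the rows the complex query selects
# (the inner query is a placeholder for the real one)
select_sql = text("SELECT id FROM metrics WHERE id IN (SELECT MAX(id) FROM metrics GROUP BY category)")
rows = db.engine.execute(select_sql).fetchall()

# One UPDATE executed with a list of parameter dicts (executemany)
# updates all matched rows in a single statement
update_sql = text("UPDATE metrics SET ref_id = :ref_id WHERE id = :id")
db.engine.execute(update_sql, [{"ref_id": new_ref_id, "id": row.id} for row in rows])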

Update variable based on match in two tables

I have 2 tables, let's name them table1 and table2. Both of them have credit_id, loan_id and Date fields. The credit_id field needs to be updated with the corresponding values from table2, linking the data by the Date and loan_id fields. To do so, I made a query like:
proc sql;
UPDATE a
SET a.credit_id = b.credit_id
FROM table1 a, table2 b
WHERE (a.Date = b.Date) AND (a.loan_id = b.loan_id);
quit;
According to my googling, this query should work in many SQL environments, but SAS seems to be an exception here: the FROM part appears to be ignored.
How do I update the needed field then?
I can't comment on the SQL, but you can do the same thing using a data step:
data table1;
update table1 table2(keep = date loan_id credit_id);
by date loan_id;
run;
This requires that:
No two rows in the same table have the same date and loan_id, and
Both tables are sorted/indexed by date and loan_id
You need the keep= on the transaction dataset in order to prevent it from updating/creating any other variables on the master dataset. There are also several other ways you could do this, e.g. using the modify or merge statements.
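For completeness, PROC SQL itself can express this update with a correlated subquery, which avoids the unsupported UPDATE ... FROM form; a hedged, untested sketch using the tables as described:
proc sql;
  update table1 as a
    set credit_id = (select b.credit_id
                     from table2 as b
                     where a.Date = b.Date and a.loan_id = b.loan_id)
    /* only touch rows that actually have a match in table2 */
    where exists (select 1
                  from table2 as b
                  where a.Date = b.Date and a.loan_id = b.loan_id);
quit;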

Alter column data type in Amazon Redshift

How to alter column data type in Amazon Redshift database?
I am not able to alter the column data type in Redshift; is there any way to modify the data type in Amazon Redshift?
As noted in the ALTER TABLE documentation, you can change the length of VARCHAR columns using
ALTER TABLE table_name
{
ALTER COLUMN column_name TYPE new_data_type
}
For other column types, all I can think of is to add a new column with the correct data type, then copy all the data from the old column to the new one, and finally drop the old column.
Use code similar to this:
ALTER TABLE t1 ADD COLUMN new_column ___correct_column_type___;
UPDATE t1 SET new_column = column;
ALTER TABLE t1 DROP COLUMN column;
ALTER TABLE t1 RENAME COLUMN new_column TO column;
There will be a schema change: the newly added column will be last in the table (that may be a problem with the COPY statement; keep in mind that you can define a column order with COPY, as shown below).
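For instance, COPY accepts an explicit column list, so the load need not depend on the physical column order; the column names, bucket path and IAM role here are placeholders:
-- Load into named columns regardless of their physical position in t1
COPY t1 (id, other_col, new_column)
FROM 's3://my-bucket/data/'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV;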
To avoid the schema change mentioned by Tomasz:
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
INSERT INTO <TABLE_NAME> (<NEW_COLUMN_DEFINITION>)
SELECT <COLUMNS>
FROM <TABLE_NAME>_OLD;
DROP TABLE <TABLE_NAME>_OLD;
END TRANSACTION;
(Recent update) It's possible to alter the type for varchar columns in Redshift.
ALTER COLUMN column_name TYPE new_data_type
Example:
CREATE TABLE t1 (c1 varchar(100))
ALTER TABLE t1 ALTER COLUMN c1 TYPE varchar(200)
Here is the documentation link
If you don't want to change the column order, an option is to create a temp table, drop and re-create the original table with the desired column size, and then bulk-load the data back in.
CREATE TEMP TABLE temp_table AS SELECT * FROM original_table;
DROP TABLE original_table;
CREATE TABLE original_table ...
INSERT INTO original_table SELECT * FROM temp_table;
The only problem with recreating the table is that you will need to grant the permissions again, and if the table is big, it will take some time.
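Re-granting is just the usual GRANT statements; a minimal sketch (the privilege list and group name are placeholders):
GRANT SELECT, INSERT, UPDATE ON original_table TO GROUP analytics_users;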
ALTER TABLE publisher_catalogs ADD COLUMN new_version integer;
update publisher_catalogs set new_version = CAST(version AS integer);
ALTER TABLE publisher_catalogs DROP COLUMN version RESTRICT;
ALTER TABLE publisher_catalogs RENAME COLUMN new_version TO version;
Redshift, being a columnar database, doesn't allow you to modify the data type directly; however, below is one approach (note that it changes the column order).
Steps:
1. ALTER TABLE to add newcolumn to the table
2. UPDATE newcolumn with the oldcolumn values
3. ALTER TABLE to drop the oldcolumn
4. ALTER TABLE to rename newcolumn to oldcolumn
If you don't want to alter the order of the columns, then the solution would be to:
1. Create a temp table with the new column definition
2. Copy the data from the old table to the new table
3. Drop the old table
4. Rename the new table to the old table's name
One important thing: create the new table using the LIKE option instead of a simple CREATE.
This method works for converting a (big)int column into a varchar:
-- Create a backup of the original table
create table original_table_backup as select * from original_table;
-- Drop the original table, and then recreate with new desired data types
drop table original_table;
create table original_table (
col1 bigint,
col2 varchar(20) -- changed from bigint
);
-- insert original entries back into the new table
insert into original_table select * from original_table_backup;
-- cleanup
drop table original_table_backup;
You can use the statements below:
ALTER TABLE <table name, e.g. etl_proj_atm.dim_card_type>
ALTER COLUMN <col name, e.g. card_type> TYPE varchar(30);
An UNLOAD and COPY with a table-rename strategy should be the most efficient way to do this operation if retaining the table structure (row order) is important.
Here is an example adding to this answer.
BEGIN TRANSACTION;
ALTER TABLE <TABLE_NAME> RENAME TO <TABLE_NAME>_OLD;
CREATE TABLE <TABLE_NAME> ( <NEW_COLUMN_DEFINITION> );
UNLOAD ('select * from <TABLE_NAME>_OLD') TO 's3://bucket/key/unload_' manifest;
COPY <TABLE_NAME> FROM 's3://bucket/key/unload_manifest' manifest;
END TRANSACTION;
For updating values in the same column in Redshift, the following works fine:
UPDATE table_name
SET column_name = 'new_value' WHERE column_name = 'old_value'
You can have multiple clauses in the WHERE by combining them with AND, so as to remove any ambiguity.
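For example (other_column and the literal values here are placeholders):
UPDATE table_name
SET column_name = 'new_value'
WHERE column_name = 'old_value'
  AND other_column = 'expected_value';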
Cheers!