Update showing rows affected but value did not change

I am trying to update the Processed column of a table with a simple update statement. The result shows that the rows were updated, but when I run a select statement the values have not changed. Below is the simple update statement I am using. The Processed column has datatype char(2).
update airNewFormat
set Processed ='Y'
where TrackingNumber in ('1', '2', '3','4');
go
select *
from airNewFormat
with (nolock)
where TrackingNumber in ('1', '2', '3','4')
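A classic cause of this symptom is a transaction that was rolled back after the UPDATE reported its row count, or a trigger on the table that rewrites the column. A quick diagnostic sketch (assuming SQL Server, as the T-SQL syntax suggests):

```sql
-- A value above zero means this session still has an open, uncommitted transaction
SELECT @@TRANCOUNT;

-- Reports the oldest active transaction in the current database, if any
DBCC OPENTRAN;

-- Lists any triggers on the table that could be rewriting Processed
EXEC sp_helptrigger 'airNewFormat';
```

If @@TRANCOUNT is above zero, issue COMMIT and re-run the SELECT.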

Oracle Apex error ORA-01776: cannot modify more than one base table through a join view

I have an app in Oracle Apex 22.21. There are multiple tables (ORDERS, ORDER_ITEMS, STORES, and PRODUCTS).
ORDERS table
I have a Master Detail report that is editable. The main report shows the ORDERS table and the detail shows the ORDER_ITEMS table.
Report image
In the ORDERS table there is a column STORE_ID, which is a foreign key to the STORES table. The STORES table has a column STORE_NAME. I am able to edit the report (change STORE_ID to another id, e.g. 1, 2, 3) when the report's Source is set to the ORDERS table.
STORES table
STORES table data
I want the ORDERS report to include the STORE_NAME column from the STORES table, since it does not make sense for the user to enter a STORE_ID to edit a row. I want the user to be able to edit the STORE_ID by entering the STORE_NAME or by choosing it from an LOV. I changed the report Source Type to SQL Query and ran the code below.
select
ORDERS_LOCAL.*,
STORES.STORE_NAME
from ORDERS_LOCAL
inner join STORES
ON ORDERS_LOCAL.STORE_ID=STORES.STORE_ID
However, when I try to edit a cell, I encounter an error ORA-01776: cannot modify more than one base table through a join view
I've found a post with solutions for this error and tried to follow the instructions. The first solution does not work in my case because I actually want the user to be able to edit the STORE_ID column by showing STORE_NAME.
I've tried changing and running the PL/SQL code exactly as instructed, but nothing saves when I change a cell value and click Save, yet I receive no error.
BEGIN
    CASE :apex$row_status
        WHEN 'C' THEN
            INSERT INTO stores (store_id, store_name)
            VALUES (:p10_store_id, :p10_store_name);
            INSERT INTO orders_local (order_id,
                                      order_number,
                                      order_date,
                                      store_id,
                                      full_name,
                                      email,
                                      city,
                                      state,
                                      zip_code,
                                      credit_card,
                                      order_items)
            VALUES (:p10_order_id,
                    :p10_order_number,
                    :p10_order_date,
                    :p10_store_id,
                    :p10_full_name,
                    :p10_email,
                    :p10_city,
                    :p10_state,
                    :p10_zip_code,
                    :p10_credit_card,
                    :p10_order_items);
        WHEN 'U' THEN
            UPDATE orders_local
               SET order_id     = :p10_order_id,
                   order_number = :p10_order_number,
                   order_date   = :p10_order_date,
                   store_id     = :p10_store_id,
                   full_name    = :p10_full_name,
                   email        = :p10_email,
                   city         = :p10_city,
                   state        = :p10_state,
                   zip_code     = :p10_zip_code,
                   credit_card  = :p10_credit_card,
                   order_items  = :p10_order_items
             WHERE order_id = :p10_order_id;
            UPDATE stores
               SET store_name = :p10_store_name
             WHERE store_id = :p10_store_id;
        WHEN 'D' THEN
            DELETE FROM orders_local WHERE order_id = :p10_order_id;
            DELETE FROM stores WHERE store_id = :p10_store_id;
    END CASE;
END;
Take a step back. The "report that is editable" is an interactive grid. If the report is display only, then you can use any SQL to display data. However, if it is editable, then the SQL statement is used to update the rows as well. The statement
select
ORDERS_LOCAL.*,
STORES.STORE_NAME
from ORDERS_LOCAL
inner join STORES
ON ORDERS_LOCAL.STORE_ID=STORES.STORE_ID
cannot be used to update the store_id in the orders_local table. Currently you're trying to work around this with custom code for the update, but that is overcomplicating things. So take a step back and restart.
The query for the interactive grid should be
select
*
from ORDERS_LOCAL
Define a List of Values to display the select list for Stores. The query for that list of values is
select
store_id as return_value,
store_name as display_value
from stores
In the interactive grid, use this list of values for the store_id column.
That is all there is to it. This will allow you to use the native process for handling the IG updates.

Redshift Pivot Function

I've got a table similar to the following which I'm trying to pivot in Redshift:
UUID   Key    Value
a123   Key1   Val1
b123   Key2   Val2
c123   Key3   Val3
Currently I'm using the following code to pivot it, and it works fine. However, when I replace the IN list with a subquery it throws an error.
select *
from (select UUID, "Key", value from tbl)
PIVOT (max(value) for "key" in (
    'Key1',
    'Key2',
    'Key3'
));
Question: What's the best way to replace the IN list with a subquery that takes the distinct values from the Key column?
What I am trying to achieve:
select *
from (select UUID, "Key", value from tbl)
PIVOT (max(value) for "key" in (
    select distinct "Key" from tbl
));
From the Redshift documentation - "The PIVOT IN list values cannot be column references or sub-queries. Each value must be type compatible with the FOR column reference." See: https://docs.aws.amazon.com/redshift/latest/dg/r_FROM_clause-pivot-unpivot-examples.html
So I think this will need to be done as a sequence of 2 queries. You likely can do this in a stored procedure if you need it as a single command.
Updated with requested stored procedure with results to a cursor example:
In order to make this supportable by you, I'll add some background info and a description of how this works. First off, a stored procedure cannot return results straight to your bench. It can either store the results in a (temp) table or in a named cursor. A cursor just stores the results of a query on the leader node, where they wait to be fetched. The lifespan of the cursor is the current transaction, so a COMMIT or ROLLBACK will delete the cursor.
Here's what you want to happen as individual SQL statements, but first let's set up the test data:
create table test (UUID varchar(16), Key varchar(16), Value varchar(16));
insert into test values
('a123', 'Key1', 'Val1'),
('b123', 'Key2', 'Val2'),
('c123', 'Key3', 'Val3');
The actions you want to perform are first to create a string for the PIVOT clause IN list like so:
select '\'' || listagg(distinct "key",'\',\'') || '\'' from test;
-- produces a string like: 'Key1','Key2','Key3'
Then you want to take this string and insert it into your PIVOT query which should look like this:
select *
from (select UUID, "Key", value from test)
PIVOT (max(value) for "key" in ( 'Key1', 'Key2', 'Key3')
);
But doing this in the bench means taking the result of one query and copy/pasting it into a second query, and you want this to happen automatically. Unfortunately, Redshift does not allow sub-queries in the PIVOT statement, for the reason given above.
We can take the result of one query and use it to construct and run another query in a stored procedure. Here's such a stored procedure:
CREATE OR REPLACE PROCEDURE pivot_on_all_keys(curs1 INOUT refcursor)
AS $$
DECLARE
    row record;
BEGIN
    SELECT INTO row '\'' || listagg(distinct "key",'\',\'') || '\'' AS keys FROM test;
    OPEN curs1 FOR EXECUTE 'select *
        from (select UUID, "Key", value from test)
        PIVOT (max(value) for "key" in ( ' || row.keys || ' ));';
END;
$$ LANGUAGE plpgsql;
What this procedure does is define and populate a "record" (1 row of data) called "row" with the result of the query that produces the IN list. Next it opens a cursor, whose name is provided by the calling command, with the contents of the PIVOT query which uses the IN list from the record "row". Done.
When executed (by running call) this function will produce a cursor on the leader node that contains the result of the PIVOT query. In this stored procedure the name of the cursor to create is passed to the function as a string.
call pivot_on_all_keys('mycursor');
All that needs to be done at this point is to "fetch" the data from the named cursor. This is done with the FETCH command.
fetch all from mycursor;
I prototyped this on a single-node Redshift cluster, where "FETCH ALL" is not supported, so I had to use "FETCH 1000". If you are also on a single-node cluster you will need to use:
fetch 1000 from mycursor;
The last point to note is that the cursor "mycursor" now exists, and if you try to rerun the stored procedure it will fail. You can pass a different name to the procedure (making another cursor), end the transaction (END, COMMIT, or ROLLBACK), or close the cursor using CLOSE. Once the cursor is destroyed you can reuse the name for a new cursor. If you want this to be repeatable, you can run this batch of commands:
call pivot_on_all_keys('mycursor'); fetch all from mycursor; close mycursor;
Remember that the cursor has a lifespan of the current transaction, so any action that ends the transaction will destroy the cursor. If you have AUTOCOMMIT enabled in your bench, it will insert COMMITs, destroying the cursor (you can run the CALL and FETCH in a batch to prevent this in many benches). Some commands also perform an implicit COMMIT and will destroy the cursor (like TRUNCATE).
For these reasons, and depending on what else you need to do around the PIVOT query, you may want to have the stored procedure write to a temp table instead of a cursor. Then the temp table can be queried for the results. A temp table has a lifespan of the session, so it is a little stickier, but it is a little less efficient: a table needs to be created, the result of the PIVOT query has to be written to the compute nodes, and then the results have to be sent to the leader node to produce the desired output. You just need to pick the right tool for the job.
===================================
To populate a table within a stored procedure you can just execute the commands. The whole thing will look like:
CREATE OR REPLACE PROCEDURE pivot_on_all_keys()
AS $$
DECLARE
    row record;
BEGIN
    SELECT INTO row '\'' || listagg(distinct "key",'\',\'') || '\'' AS keys FROM test;
    EXECUTE 'drop table if exists test_stage;';
    EXECUTE 'create table test_stage as select *
        from (select UUID, "Key", value from test)
        PIVOT (max(value) for "key" in ( ' || row.keys || ' ));';
END;
$$ LANGUAGE plpgsql;
call pivot_on_all_keys();
select * from test_stage;
If you want this new table to have keys for optimizing downstream queries, you will want to create the table in one statement and then insert into it, but this is the quick path.
A little off-topic, but I wonder why Amazon couldn't introduce a simpler syntax for pivot. IMO, if GROUP BY were replaced by PIVOT BY, it would give the interpreter enough of a hint to transform rows into columns. For example:
SELECT partname, avg(price) as avg_price FROM Part GROUP BY partname;
can be written as:
SELECT partname, avg(price) as avg_price FROM Part PIVOT BY partname;
Even multi-level pivoting could be handled with the same syntax.
SELECT year, partname, avg(price) as avg_price FROM Part PIVOT BY year, partname;

clickhouse how to guarantee one data row per a pk(sorting key)?

I am struggling to get ClickHouse to keep a unique data row per PK (sorting key).
I chose this column-oriented DB to serve statistics data quickly and am very satisfied with its speed. However, I have a duplicated-data issue here.
The test table looks like...
CREATE TABLE test2 (
`uid` String COMMENT 'User ID',
`name` String COMMENT 'name'
) ENGINE ReplacingMergeTree(uid)
ORDER BY uid
PRIMARY KEY uid;
Let's presume that I am going to join against this table to display names (the name field). However, I can insert as many rows as I want with the same PK (sorting key).
For example:
INSERT INTO test2
(uid, name) VALUES ('1', 'User1');
INSERT INTO test2
(uid, name) VALUES ('1', 'User2');
INSERT INTO test2
(uid, name) VALUES ('1', 'User3');
SELECT * FROM test2 WHERE uid = '1';
Now I can see 3 rows with the same sorting key. Is there any way to make the key unique, or at least to prevent an insert if the key already exists?
Let's think about the scenario below.
The tables and data are:
CREATE TABLE blog (
`blog_id` String,
`blog_writer` String
) ENGINE MergeTree
ORDER BY tuple();
CREATE TABLE statistics (
`date` UInt32,
`blog_id` String,
`read_cnt` UInt32,
`like_cnt` UInt32
) ENGINE MergeTree
ORDER BY tuple();
INSERT INTO blog (blog_id, blog_writer) VALUES ('1', 'name1');
INSERT INTO blog (blog_id, blog_writer) VALUES ('2', 'name2');
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt) VALUES (202007, '1', 10, 20);
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt) VALUES (202008, '1', 20, 0);
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt) VALUES (202009, '1', 3, 1);
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt) VALUES (202008, '2', 11, 2);
And here is summing query
SELECT
b.writer,
a.read_sum,
a.like_sum
FROM
(
SELECT
blog_id,
SUM(read_cnt) as read_sum,
SUM(like_cnt) as like_sum
FROM statistics
GROUP BY blog_id
) a JOIN
(
SELECT blog_id, blog_writer as writer FROM blog
) b
ON a.blog_id = b.blog_id;
At this moment it works fine, but if a new row comes in like
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt) VALUES (202008, '1', 60, 0);
what I expect is for the existing row to be replaced, so that "name1"'s read_sum becomes 73. Instead it shows 93, since the duplicated insert is allowed.
Is there any way to
prevent duplicated insert
or set unique guaranteed PK in table
Thanks.
One thing that comes to mind is ReplacingMergeTree. It won't guarantee the absence of duplicates right away, but it will do so eventually. As the docs state:
Data deduplication occurs only during a merge. Merging occurs in the
background at an unknown time, so you can’t plan for it. Some of the
data may remain unprocessed.
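If you need the deduplicated result before a background merge has happened, ClickHouse can also be asked to collapse duplicates explicitly, at the cost of extra work:

```sql
-- Merge duplicate rows at query time (slower, but exact)
SELECT * FROM test2 FINAL WHERE uid = '1';

-- Or trigger an unscheduled merge of the table's data parts (expensive on large tables)
OPTIMIZE TABLE test2 FINAL;
```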
Another approach that I personally use is introducing another column named, say, _ts: a timestamp of when the row was inserted. This lets you track changes, and with the help of ClickHouse's beautiful LIMIT BY you can easily get the latest version of a row for a given PK.
CREATE TABLE test2 (
    `uid` String COMMENT 'User ID',
    `name` String COMMENT 'name',
    `_ts` DateTime
) ENGINE MergeTree
ORDER BY uid;
Select would look like this:
SELECT uid, name FROM test2 ORDER BY _ts DESC LIMIT 1 BY uid;
In fact, you don't need a PK; just specify in LIMIT BY whichever column(s) you need the rows to be unique by.
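For instance, with explicit timestamps (using toDateTime here for clarity), the row with the greatest _ts per uid is the one returned:

```sql
INSERT INTO test2 (uid, name, _ts) VALUES ('1', 'User1', toDateTime('2020-01-01 00:00:00'));
INSERT INTO test2 (uid, name, _ts) VALUES ('1', 'User2', toDateTime('2020-01-02 00:00:00'));

-- Returns only the most recent row for each uid ('User2' for uid '1')
SELECT uid, name FROM test2 ORDER BY _ts DESC LIMIT 1 BY uid;
```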
Besides ReplacingMergeTree, which runs deduplication asynchronously (so you can temporarily have duplicated rows with the same PK), you can use CollapsingMergeTree or VersionedCollapsingMergeTree.
With CollapsingMergeTree you could do something like this:
CREATE TABLE statistics (
    `date` UInt32,
    `blog_id` String,
    `read_cnt` UInt32,
    `like_cnt` UInt32,
    `sign` Int8
) ENGINE CollapsingMergeTree(sign)
ORDER BY blog_id;
The only caveat is that on every insert of a duplicated PK you have to cancel the previous row, something like this:
-- first insert
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt, sign) VALUES (202008, '1', 20, 0, 1);
-- cancel the previous insert, then insert the new values
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt, sign) VALUES (202008, '1', 20, 0, -1);
INSERT INTO statistics(date, blog_id, read_cnt, like_cnt, sign) VALUES (202008, '1', 11, 2, 1);
I do not think this is a real solution to the problem, but at least I detour around it this way from a business perspective.
Since ClickHouse does not officially support modification of table data (it provides ALTER TABLE ... UPDATE | DELETE, but those eventually rewrite the table), I split the table into many small partitions (in my case, one partition holds about 50,000 rows) and, when duplicated data comes in, 1) drop the partition and 2) re-insert the data. In the case above, I always execute an ALTER TABLE ... DROP PARTITION statement before the insert.
I also tried ReplacingMergeTree, but data duplication still occurred. (Maybe I do not understand how to use the table, but I gave it a single sorting key, and when I inserted duplicated data there were multiple rows with the same sorting key.)

PSQL Batch Update Statement

I am trying to update a bunch of rows in a PostgreSQL database, and would like to do it in one generated SQL statement if possible. I am able to generate batch insert statements that look similar to this:
INSERT INTO my_table (col1, col2, col3)
VALUES (v11, v12, v13), (v21, v22, v23), ...
However, I am not sure how to do this with an update statement instead. I could issue one SQL statement for each row I want to update, but that seems unnecessary and slower than a single statement.
P.S. All rows have an id column, so I can reference them through that.
Near Bottom of Page
I was able to find the answer at the above link. It looks similar to:
UPDATE my_table
SET x = case
when y = '1' then '1.1'
when y = '2' then '1.2'
end
WHERE y='1' OR y='2';
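Another common PostgreSQL idiom for the same job is joining an inline VALUES list on the id column, which avoids repeating the CASE for every target column (table and column names here are illustrative):

```sql
UPDATE my_table AS t
SET col1 = v.col1,
    col2 = v.col2
FROM (VALUES
    (1, 'a', 'x'),
    (2, 'b', 'y')
) AS v(id, col1, col2)  -- one VALUES row per target row
WHERE t.id = v.id;
```

Like the batch INSERT above, this keeps the whole update to a single statement and a single round trip.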

Update each column's value pertaining to a specific row number? sqlite sqlite3

I am terrible with databases and SQL, but I know the basics, and this is a bit beyond the basics, I believe. I need a way to update an entire row's values, where the row is selected by row number, because I have no guarantee of a value in any of the columns of that row.
Example DB Table:
Col1  Col2  Col3
a     b     c
d     e     f
g     h     i
So I need a way to say, for example:
Update exampleTableName
Set Col1 = 'j', Col2 = 'k', Col3 = 'l'
Where rowNumber = 2
I am writing this in C++ using sqlite3, but just the query will do for me.
Thanks
Edit:
I know this sounds ridiculous, and many will ask why the table is set up this way, but I am not in control of that. All I can say is that I am able to figure out a row number and need to update each column's value according to what is stored in another variable. Normally these variables hold the value I want to look up, but that isn't ever guaranteed. So I can only record the row the user selects at the time of the lookup, keep track of its index, and then update (the last selected row's) values according to whatever is in those variables, which could be the same or could have changed.
By default, every row in SQLite has a special column, usually called the "rowid", that uniquely identifies that row within the table. You can use that to select your row.
However if the phrase "WITHOUT ROWID" is added to the end of a CREATE TABLE statement, then the special "rowid" column is omitted. There are sometimes space and performance advantages to omitting the rowid.
Update exampleTableName
Set Col1 = 'j', Col2 = 'k', Col3 = 'l'
Where rowid = 2
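Since the row is chosen by position rather than by content, it helps to fetch the rowid alongside the data first and keep it for the later update; note that rowids can be renumbered by VACUUM unless the table declares an INTEGER PRIMARY KEY alias for them:

```sql
-- rowid can be selected even though it is not declared in the schema
SELECT rowid, Col1, Col2, Col3 FROM exampleTableName;

-- then update the row whose rowid was recorded
UPDATE exampleTableName
SET Col1 = 'j', Col2 = 'k', Col3 = 'l'
WHERE rowid = 2;
```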
Your table needs some form of primary key. At a minimum you should add an INTEGER PRIMARY KEY column; in SQLite that column becomes an alias for the rowid and gives each row a stable number.