How to resolve 'Update operation attempted on non-updatable query'? - sql-update

I am working on a Sybase IQ server (Sybase IQ/15.4.0.3014).
I have a working query that updates one field of a table, as below:
update table1
set a.field1 = b.some_value
from table1 a,
     table2 b
where a.id = b.id
This works fine when I execute it from a SQL session. When it is called from a higher-level application, I get the error below for the same query:
SQL Exception code is 7301
Update operation attempted on non-updatable query
I am not able to find out why I am getting this error. Is there any way to amend the query? Searching the internet has not helped much.
Has anyone come across such an issue?

You attempted an insert, update, or delete operation on a query that is implicitly read-only. You are trying to update a system table, or a table that cannot be changed in that manner.
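Since the question asks whether the query can be amended, one hedged possibility is a sketch that uses a standard correlated subquery instead of the vendor-specific UPDATE ... FROM form (same tables and columns as in the question), in case the application layer handles that form differently:
UPDATE table1
SET field1 = (SELECT b.some_value
              FROM table2 b
              WHERE b.id = table1.id)
WHERE EXISTS (SELECT 1
              FROM table2 b
              WHERE b.id = table1.id);
The WHERE EXISTS clause keeps rows without a match in table2 from being set to NULL.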

I suggest double-checking that the object you are updating is actually a table and not a view. If it is a view, you can see its definition with the sp_helptext command, such as
sp_helptext 'view_name'
or
sp_helptext 'schema_name.view_name'
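As a further check, a minimal sketch that looks the object up in the catalog (this assumes the SQL Anywhere/IQ catalog view SYS.SYSTABLE and its table_type column are available on your build):
-- table_type = 'VIEW' indicates a view; 'BASE' indicates a regular base table
SELECT table_name, table_type
FROM SYS.SYSTABLE
WHERE table_name = 'table1';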

Related

INSERT INTO query not working in ColdFusion?

Please check my query below
<cfquery datasource="quackit" name="insertuser">
INSERT INTO user (user_id, group_id)
VALUES (#form.usr_id#,#form.access_flg#)
</cfquery>
But I am getting "Error executing database query".
I am able to fetch data with other queries. Please tell me where I am going wrong.
Use cfqueryparam. You are wide open to SQL injection right now. The error probably comes from one of your form variables being blank, which you will need to check for either way.
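A hedged sketch of what the parameterized version could look like (the cf_sql_integer types are assumptions about the two columns; adjust them to the real column types):
<cfquery datasource="quackit" name="insertuser">
    INSERT INTO user (user_id, group_id)
    VALUES (
        <cfqueryparam value="#form.usr_id#" cfsqltype="cf_sql_integer">,
        <cfqueryparam value="#form.access_flg#" cfsqltype="cf_sql_integer">
    )
</cfquery>
With cfqueryparam in place, a blank form value typically fails with a clearer validation error instead of producing malformed SQL.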
We can't give more precise advice without details on the specific database error. If it's not due to the form feeding in bad data, it has something to do with your database structure (perhaps you are trying to insert an invalid foreign key, or leaving off a required field).

Can I disable QUERY OPTIMIZATION for MQT in DB2 LUW without recreating it?

UPDATE from 2021
This question is no longer relevant for me.
I worked with DB2 only for a short period, and I don't know how things stand in recent versions.
The problem was that I could not test the effect of an MQT without rebuilding it, which was not practical when dealing with multi-GB data.
I did not find a solution at the time, and I don't know why the question was downvoted.
SO recommends not deleting questions with answers, and who knows: maybe somebody will finally answer it.
I have an MQT in DB2 10.5 LUW:
CREATE TABLE MyMQT AS (
    SELECT * FROM MyTable
    WHERE ServerName = 'COL'
      AND LASTOCCURRENCE > TIMESTAMP '2015-12-21 00:00:00'
)
DATA INITIALLY DEFERRED REFRESH IMMEDIATE
ENABLE QUERY OPTIMIZATION
MAINTAINED BY SYSTEM;
I want to DISABLE QUERY OPTIMIZATION without DROP/CREATE.
I found "Altering materialized query table properties" https://www-01.ibm.com/support/knowledgecenter/SSEPEK_10.0.0/com.ibm.db2z10.doc.admin/src/tpc/db2z_changemqtableattribs.html
but this is for z/OS.
If I try:
ALTER TABLE MyMQT DISABLE QUERY OPTIMIZATION;
I get:
DB21034E The command was processed as an SQL statement because it was not a
valid Command Line Processor command. During SQL processing it returned:
SQL0104N An unexpected token "TABLE" was found following "ALTER ". Expected
tokens may include: "VIEW". SQLSTATE=42601
The LUW documentation explains how to change an MQT into a regular table and back.
Can I alter MQT options in DB2 LUW without recreating it?
Edit
It's quite strange, but it looks like this is impossible to achieve in DB2 LUW.
As data_henrik mentioned, it is possible to disable/enable optimization for all MQTs.
I accepted his answer although it's not quite what I was looking for.
No personal experience with it, but you could:
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = NONE
This would tell DB2 not to consider any MQT. Later on you would re-enable query optimization by setting that register to "SYSTEM" (the default) or something else. That statement is documented here.
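For completeness, a minimal sketch of how that session-level toggle might be used while testing (SYSTEM is the documented default value of the register):
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = NONE;
-- run the queries under test; system-maintained MQTs are not considered for query rewrite
SET CURRENT MAINTAINED TABLE TYPES FOR OPTIMIZATION = SYSTEM;  -- restore the default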
Try this (the refreshable-table-options clause from the CREATE TABLE syntax):
DATA INITIALLY DEFERRED REFRESH { DEFERRED | IMMEDIATE }
    [ ENABLE QUERY OPTIMIZATION | DISABLE QUERY OPTIMIZATION ]
ENABLE QUERY OPTIMIZATION is the default.
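Alternatively, since the question notes that the LUW documentation describes converting an MQT into a regular table and back, here is a hedged sketch of that route (the ALTER TABLE ... DROP/ADD MATERIALIZED QUERY clauses as I recall them from the LUW documentation; verify against your version, and note that the final REFRESH recomputes the MQT, which is the very cost the question was trying to avoid):
-- turn the MQT into a regular table (its data is kept)
ALTER TABLE MyMQT DROP MATERIALIZED QUERY;
-- turn it back into an MQT, this time with optimization disabled
ALTER TABLE MyMQT ADD MATERIALIZED QUERY (
    SELECT * FROM MyTable
    WHERE ServerName = 'COL'
      AND LASTOCCURRENCE > TIMESTAMP '2015-12-21 00:00:00'
)
DATA INITIALLY DEFERRED REFRESH IMMEDIATE
DISABLE QUERY OPTIMIZATION
MAINTAINED BY SYSTEM;
-- the table is left in set integrity pending state and must be repopulated
REFRESH TABLE MyMQT;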

Oracle Pro*C for insert with a sub select query causing ORA-01403: no data found

I am using C++ code with embedded Pro*C (Version: 11.2.0.3.0) for Oracle DB. I am running a bulk insert statement as below:
insert into TBL1 (col1, col2)
select a.col1, b.col2 from TBL2 a, TBL3 b
where a.col1 = :v and a.col2 = b.col2
I run this query for a set of records to be inserted, binding values for :v in place.
However, while some records could be inserted, some failed with
ORA-01403: no data found
I can see from sqlca.sqlerrd[2] the number of rows that were inserted, so I know M out of N records made it in. Now I would like to know which records failed, so I need a way to identify the a.col1 values that caused the failure.
Is there any way out? Any clue or direction would be very helpful.
This is a bit long for a comment.
The error you are referencing is a PL/SQL error, documented here. This is not an error that an insert would normally produce.
My one guess is that the table has an insert trigger and this trigger is causing the problem.
It is also possible that your code is in a larger block, and something else in the block is causing the error.
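If the trigger theory is worth checking, a small diagnostic sketch against the standard Oracle data dictionary (table name taken from the question):
-- list triggers defined on TBL1; an insert trigger raising NO_DATA_FOUND would surface as ORA-01403
SELECT trigger_name, trigger_type, triggering_event, status
FROM   user_triggers
WHERE  table_name = 'TBL1';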

In Redshift, how do you combine CTAS with the "if not exists" clause?

I'm having some trouble getting this table creation query to work, and I'm wondering if I'm running in to a limitation in redshift.
Here's what I want to do:
I have data that I need to move between schema, and I need to create the destination tables for the data on the fly, but only if they don't already exist.
Here are queries that I know work:
create table if not exists temp_table (id bigint);
This creates a table if it doesn't already exist, and it works just fine.
create table temp_2 as select * from temp_table where 1=2;
So that creates an empty table with the same structure as the previous one. That also works fine.
However, when I do this query:
create table if not exists temp_2 as select * from temp_table where 1=2;
Redshift chokes and says there is an error near "as" (for the record, I did try removing "as", and then it says there is an error near "select").
I couldn't find anything in the redshift docs, and at this point I'm just guessing as to how to fix this. Is this something I just can't do in redshift?
I should mention that I absolutely can separate out the queries that selectively create the table and populate it with data, and I probably will end up doing that. I was mostly just curious if anyone could tell me what's wrong with that query.
EDIT:
I do not believe this is a duplicate. The post linked to offers a number of solutions that rely on user-defined functions... Redshift doesn't support UDFs. They did recently implement a Python-based UDF system, but my understanding is that it's in beta, and we don't know how to implement it anyway.
Thanks for looking, though.
I couldn't find anything in the redshift docs, and at this point I'm just guessing as to how to fix this. Is this something I just can't do in redshift?
Indeed, this combination of CREATE TABLE ... AS SELECT and IF NOT EXISTS is not possible in Redshift (per the documentation). In PostgreSQL, it has been possible since version 9.5.
On SO, this is discussed here: PostgreSQL: Create table if not exists AS. The accepted answer provides options that don't require any UDF or procedural code, so they're likely to work with Redshift too.
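Since the combined form isn't supported, here is a minimal sketch of the two-step workaround the question already mentions (same table names as above; the explicit column list has to match what temp_table actually defines):
-- create the destination only if it is missing; CTAS cannot be used here, so spell out the columns
CREATE TABLE IF NOT EXISTS temp_2 (id BIGINT);
-- move the data in a separate statement; WHERE 1=2 copies no rows, as in the question's test
INSERT INTO temp_2 SELECT * FROM temp_table WHERE 1=2;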

How to reindex or fix rowid after a row is deleted in SQLite?

Alright I am in a rather difficult situation, or at least I think so anyway. I have been doing some research on how to fix my problem but have really come up empty handed.
I need to be able to reindex the rowid of my table after I delete a row. That way at any given time when I want to update or index a row by the rowid it is accessing the correct one.
Now, for those of you asking why: basically I am interfacing with a "homebrewed" db that was programmed in C and is really just a bunch of memory locations, all accessed as if they were a db table. What I'm trying to say is that they can look up a row by searching for a value in the table, or by simply saying "I want row 6". Lastly, the table could consist of really anything, with any values, which means they don't create a column to serve as an index; to my knowledge, the only way for me to index their rows by row number is the rowid.
So I have found that VACUUM would do what I want or need, but it appears that the system the database lives in isn't giving SQLite write privileges, so when VACUUM is run it comes back with an error (ERROR 14, "Unable to open the database file"). I also know that my db is open, so that isn't the issue; not having write privileges is the only explanation I can come up with. I have also read some things about AUTOINCREMENT, but I didn't really understand it or think it was going to fix my problem.
Any suggestions or ideas from the SQLite or database geniuses out there would be appreciated.
Not sure if I have completely understood your problem, but if you can use SQL code, maybe you can write a query that renumbers the IDs so they end up in dense order again.
You can use a query like this:
UPDATE t1
SET id = (SELECT rank
          FROM (SELECT id,
                       (SELECT count() + 1
                        FROM (SELECT DISTINCT id
                              FROM t1 AS t
                              WHERE t.id < t1.id)
                       ) rank
                FROM t1) AS sub
          WHERE sub.id = t1.id);
You can check my demo on SQL Fiddle. In this demo you will see the result of the DELETE and UPDATE statements (which simulate your case) if you run all the queries together.
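For reference, a small self-contained run of the same idea (hypothetical sample data; t1 as in the query above):
-- set up a toy table with a gap left by a DELETE
CREATE TABLE t1 (id INTEGER, val TEXT);
INSERT INTO t1 (id, val) VALUES (1, 'a'), (2, 'b'), (3, 'c'), (4, 'd');
DELETE FROM t1 WHERE id = 2;          -- ids are now 1, 3, 4
-- (run the UPDATE from above here)
SELECT id, val FROM t1 ORDER BY id;   -- ids come back dense: 1, 2, 3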