I'm practicing restoring tables from the recycle bin in Oracle 19.
I already know about the FLASHBACK TABLE statement. However, it only restores the most recently dropped version of a table.
What if I want to restore an earlier state of my table?
Here's what I've already found and tried:
With this query I get the names of the dropped copies of my table and when they were dropped:
select object_name, droptime from recyclebin where original_name = 'TEST';
Then I copy the object_name of the copy I need into my flashback statement:
flashback table BIN$USnbm7YhQBu9TbSyOdqyKA==$0 TO BEFORE DROP;
This statement gives me ORA-00905: missing keyword.
Is there a way to correct the last statement, or does the whole method not work?
Recycle bin objects have unusual names and must be enclosed in double quotation marks, like this:
flashback table "BIN$USnbm7YhQBu9TbSyOdqyKA==$0" TO BEFORE DROP;
I have created a BigQuery table with two columns:
col1 string nullable
col2 string required
Next I populated the table with some dummy data:
insert into `test.test` values ('val1', 'val2')
insert into `test.test` values (null, 'val2')
insert into `test.test` values ('val1', 'val2')
After that I dropped a single column:
Alter table `test.test` drop column col2
After that I would like to add a new column:
alter table `test.test` add column col3 string
HINT: The new column I am trying to add has a different name than the one I deleted.
BigQuery raises this error:
Column `col2` was recently deleted in the table `test`. Deleted column name is reserved for up to the time travel duration, use a different column name instead.
This doesn't seem right. I know that deleted columns are still kept somewhere in the BigQuery world, but I am trying to add a column with a different name than the deleted one.
Any idea?
This is a known issue and is being worked on by the BigQuery engineering team. You may click +1 to bring more attention to the issue and star it so that you are notified of updates.
Meanwhile, as a workaround, you can try adding the new field via:
BigQuery UI
bq command line
API or Client Library
Yesterday I scheduled a daily query that overwrites a table. The new table should be partitioned, just like the table it overwrites... It did not run at the scheduled time, nor did it give an error... It just did not start.
My feeling is that it has to do with the partitioning option. Note that the cast of the field date_formatted, which will be used as the partition field, works fine.
As far as I know, when scheduling a query you can't use the create or replace table T partitioned by column C as select... statement (see the sketch below).
You start from the select... clause, as shown in the image, and I don't know if the problem comes from there.
PS: I had no trouble scheduling appends to a table partitioned by day with this same procedure.
The destination table is in the same dataset.
If the very same query is scheduled to deliver the results to a table with the same name, but in a different dataset (located in the same project), it works.
By the way, the input table and the output table were never the same.
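For reference, this is a sketch of the kind of full DDL statement referred to above. The project, dataset, table, and source column names are placeholders; only date_formatted comes from the question. If the scheduler really only accepts the select... part, a statement like this cannot be entered there as a whole.
create or replace table `my_project.my_dataset.my_table`
partition by date_formatted
as
select
  -- cast the source string to DATE so it can be used as the partition column
  cast(event_date_string as date) as date_formatted,
  * except (event_date_string)
from `my_project.my_dataset.source_table`;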
I have a dataset with 6 character variables, including Day5, Day6, Day7, City1, City2, City3.
I am trying to rename Day5, which was imported as i__Day5 when the txt file was read into SAS. The variable i__Day5 is not getting renamed to Day5, and so no observations show for this variable.
data subset ;
set subset ;
rename i__Day5 = Day5;
run;
Thanks.
As Tom mentioned, your problem likely stems from overwriting the original table with the modified data and then submitting your code to run again.
It will work the first time, when the variable i__Day5 still exists, but on a second run the variable no longer exists because it has already been renamed.
To avoid this issue, never re-use table names. This code would be better:
data subset2 ;
set subset ;
rename i__Day5 = Day5;
run;
Space is cheap so there's no real downside to doing this, plus it gives you an easy way to compare the table before/after running the code.
The only other possibility is that you are viewing field labels rather than field names. As samkart mentions, you can verify the actual field names by running a PROC CONTENTS against your table.
I need to use an Append object after a series of joins that have a conditional run... So a join step may not execute if its condition is not met, and its physical WORK dataset will not be created.
The problem is that the Append step throws an error if one or more of its input physical datasets were not created.
Is there a smart way to create an empty physical table from the metadata structure of the joins' work tables, or to use the Append with some datasets that were never created?
A CREATE TABLE with the full list of fields is not a real solution, because I would have to replicate it for 8 different joins and then replicate the job 10 times...
Thanks to all
Roberto
Thank you for your comments.
What you should do:
Amend your conditional node so that, on the positive condition, it creates a global macro variable with the value MAX and, on the negative condition, it creates the same variable with the value 0.
Replace the offending SQL step with a "CREATE TABLE" node.
In the options for "CREATE TABLE", specify the macro variable for "MAXIMUM OUTPUT ROWS (OUTOBS)". See the picture below for an example of those options.
So now, when your condition is not met, you always end up with an empty table. When the condition is met, the step executes normally.
I must say my version of DI Studio is a bit old. In my version the SQL node doesn't allow passing macro variables to SQL options; only integers can be typed in. Check whether your version allows it, because if it does, you can amend the existing SQL step and avoid replacing it with another node.
One more thing: you will get a warning when the OUTOBS option is less than the number of rows the resulting dataset would have.
Let me know if you have any questions.
See the picture for create table options:
In the end I created another step that extracts 0 rows from the source table, using the condition 1=0 in the WHERE tab. This way I have an empty table that I can use with a DATA/SET step in the post-SQL of the conditional run if the join's work table does not exist.
This is not a solution, but it is a valid workaround.
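For anyone reproducing this, here is a minimal sketch of the idea in plain SQL (in SAS it would sit inside a PROC SQL step); the table names are placeholders. A predicate that is always false copies the column structure of the source but returns no rows, so the later append always has a physical table to read.
/* create an empty table with the same columns as the join's source */
create table work.join_result_empty as
select *
from work.join_source
where 1 = 0;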
This is my first post to Stack Overflow. Your forum has been SO very helpful as I've been learning Python and Postgres on the fly for the last 6 months that I haven't needed to post until now. But this task is tripping me up, and I figure I need to start earning reputation points:
I am creating a Python script for backing up data into a SQL database daily. I have a CSV file with an entire month's worth of hourly data, but I only want to select a single day of data from the file and copy those rows into my database. Am I able to query the CSV file and append the query results to my database? For example:
sys.stdin = open('file.csv', 'r')
cur.copy_expert("COPY table FROM STDIN
SELECT 'yyyymmddpst LIKE 20140131'
WITH DELIMITER ',' CSV HEADER", sys.stdin)
This code and other variations aren't working out - I keep getting syntax errors. Can anyone help me out with this task? Thanks!!
You need to create a temporary table first:
cur.execute('CREATE TEMPORARY TABLE "temp_table" (LIKE "your_table") WITH OIDS')
Then copy the data from the CSV:
cur.execute("COPY temp_table FROM '/full/path/to/file.csv' WITH CSV HEADER DELIMITER ','")
Insert the necessary records (quoting the literal so the comparison works against a text column):
cur.execute("INSERT INTO your_table SELECT * FROM temp_table WHERE yyyymmddpst LIKE '20140131'")
And don't forget to run conn.commit().
The temp table is dropped automatically at the end of the session, i.e. when the connection is closed.
You can COPY (SELECT ...) TO an external file, because PostgreSQL just has to read the rows from the query and send them to the client.
The reverse is not true: you can't COPY (SELECT ...) FROM .... If it were a simple SELECT, PostgreSQL could try to pretend it was a view, but really it doesn't make much sense, and in any case it would apply to the target table, not the source rows. So the code you wrote wouldn't do what you think it does, even if it worked.
In this case you can create an unlogged or temporary table, copy the full CSV to it, and then use SQL to extract just the rows you want, as pointed out by Dmitry.
An alternative is to use file_fdw to map the CSV file as a table. The CSV isn't copied; it's just read on demand. This lets you skip the temporary-table step.
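A minimal sketch of that file_fdw approach, assuming the extension (a standard contrib module) is installed; the server name, the foreign table's column list, and the file path are placeholders that would have to match the real CSV, and the exact WHERE predicate depends on how the date column is formatted.
-- map the CSV as a read-on-demand foreign table
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE hourly_csv (
    yyyymmddpst text,
    reading     numeric
) SERVER csv_files
  OPTIONS (filename '/full/path/to/file.csv', format 'csv', header 'true');
-- then pull just the wanted day into the real table
INSERT INTO your_table
SELECT * FROM hourly_csv
WHERE yyyymmddpst LIKE '20140131';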
From PostgreSQL 12 you can add a WHERE clause to your COPY statement and you will get only the rows that match the condition.
So your COPY statement could look like:
COPY table
FROM '/full/path/to/file.csv'
WITH (FORMAT CSV, HEADER, DELIMITER ',')
WHERE yyyymmddpst LIKE '20140131'