Toad - error during copy to another schema: "Where clauses are not configured for any tables."

Within the Toad Schema Browser I am trying to copy data from one table to another via Data -> Copy to another schema, but under the Tables tab I don't know how to configure the WHERE clause to specify which destination table the data should be copied to.
Toad Version 14.1

As far as I can tell, the WHERE clause is used to "copy" only those rows which satisfy the condition. You don't select destination tables here.
The answer to your problem is:
the destination schema should already have a target table (whose name and structure exactly match the source table's), or
simply check the "Create destination tables if needed" checkbox in the "Before copy" section and Toad will create it for you.
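For context, what that copy does is roughly equivalent to the SQL below; the schema and table names and the date filter are placeholders, and the WHERE clause is only the optional row filter from the Tables tab:

-- placeholder names; the WHERE clause only restricts which rows get copied
INSERT INTO target_schema.my_table
SELECT *
  FROM source_schema.my_table
 WHERE created_date >= DATE '2021-01-01';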

Related

Power Automate: how to catch which column was updated in Dataverse Connector

I'm starting from a "When a row is added, modified or deleted" trigger, and I'm passing it into a switch connector that checks whether the row was added, modified or deleted.
I'm then using the mail node to notify myself when a row is added, modified or deleted; in the case where a row is modified, I have to include in the mail which fields of that row have been modified.
I can't find out whether this check is possible (comparing the row with its pre-modification version) or how to do it.
This is the embryonic flow.
As requested, I'll try to be more detailed.
Please note that this is a Power Automate flow, so there is almost no code.
The CRUD connector takes 3 arguments:
- Change type (when an item is added, modified or deleted)
- The table name (the Dataverse table name)
- The scope (business unit)
So I need to know whether there is a variable or another connector (for example, in the output of this connector) that contains which column changed and caused the trigger.
It's a question about the output of, or possible connectors related to, the Dataverse CRUD node, so there is no code involved and no further downstream flow specification is needed to understand my request.
A solution is to create a new field that keeps the previous value of the original field, and to use trigger conditions so that your flow runs only when those two fields don't match, meaning the original field has been updated and its value has changed.
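For illustration only (not part of the original answer): assuming the tracked column is new_phone and its shadow copy is new_phone_previous, a trigger condition on the Dataverse trigger could look something like this:

@not(equals(triggerOutputs()?['body/new_phone'], triggerOutputs()?['body/new_phone_previous']))

With that condition in place, the flow fires only when the two values differ, i.e. when the tracked field has actually changed.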

Creating An external Table With Partitions in GCP

I am trying to create an external table with partitioning; below is the reference image I am using.
Here is what I intend to do:
I have files flowing into this folder:
I need to query the external table based on the date:
e.g.:
select * from where _PartitionDate = '';
My specific question is: what should I fill in for the GCS bucket & Source data partitioning fields?
Thank you.
According to the documentation that Guillaume provided [1], you should check the Source data partitioning box and provide the following prefix there:
gs://datalake-confidential-redacted/ExternalTable_Data/
Also, the Table type should be External table.
Once that is set, you should be able to create the table. I have reproduced the steps on my own and it works.
[1] https://cloud.google.com/bigquery/docs/hive-partitioned-queries-gcs#hive-partitioning-options
This part of the documentation should help you. You need to check Source data partitioning and then fill in your prefix URI, such as
gs://datalake-confidential-redacted/ExternalTable_Data/{dt:DATE}
And then use this dt field like any other field in your queries:
SELECT *
FROM `externale-table`
WHERE dt = "2020-01-10"
The custom wizard has an issue with this approach; once we used Terraform scripts it was successful. You need to set the Hive partitioning mode to Custom, and once the date column is created it is added as a column to the table, thereby allowing you to query on it.
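If you prefer DDL over the console wizard, a minimal sketch of an equivalent statement could look like the one below; the project and dataset names and the CSV format are assumptions, so adjust them to your actual layout:

CREATE EXTERNAL TABLE `my-project.my_dataset.external_table`
WITH PARTITION COLUMNS (
  dt DATE  -- the partition key encoded in the folder names
)
OPTIONS (
  format = 'CSV',
  uris = ['gs://datalake-confidential-redacted/ExternalTable_Data/*'],
  hive_partition_uri_prefix = 'gs://datalake-confidential-redacted/ExternalTable_Data/'
);

After that, WHERE dt = "2020-01-10" in your queries prunes the scan to the matching folder.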

Building app to upload CSV to Oracle 12c database via Apex

I've been asked to create an app in Oracle Apex that will allow me to drop in a CSV file. The file contains a list of all active physicians and associated info in my area. I do not know where to begin! Requirements:
- after dropping the CSV file into Apex, remove unnecessary columns
- edit data in each field, e.g. if a phone # is longer than 7 characters and begins with 1, remove the 1, or remove all special characters from a column
- the CSV contains physicians of every specialty; I only want to upload specific specialties to the database table
I have a small amount of SQL experience from Uni, and I know some HTML and CSS, but beyond that I am lost. Please help!
I began the Oracle Apex tutorial and created an upload wizard in a dev environment. The desired behaviour:
User drops a CSV file into Apex
Apex edits columns to remove unnecessary characters
Only uploads specific columns from the CSV file
Only adds data when the "Specialties" column matches specific specialties
Does not add redundant data (if the physician is already in the table, do nothing)
Produces a report showing all new physicians added to the table
Huh, you're in deep trouble, as you have to do a job using a tool you don't know at all, with limited knowledge of SQL. Yes, it is said that Apex is simple to use, but nonetheless ... you have to know at least something. Otherwise, as you said, you're lost.
See if the following helps.
there's the CSV file
create a table in your database; its description should match the CSV file. Include all the columns it contains, and pay attention to datatypes, column lengths and such
this table will be "temporary" - you'll use it every day to load data from the CSV files: first you'll delete everything it contains, then load the new rows
using the Apex "Create Page" wizard, create the "Data Loading" process. Follow the instructions (and/or read the documentation about it). Once you're done, you'll have 4 new pages in your Apex application
when you run it, you should be able to load CSV file into that temporary table
That's the first stage - successfully load data into the database. Now, the second stage: fix what's wrong.
create another table in the database; it will be the "target" table and is supposed to contain only data you need (i.e. the subset of the temporary table). If such a table already exists, you don't have to create a new one.
create a stored procedure. It will read data from the temporary table and apply all the edits you've mentioned (remove special characters, remove the leading "1", ...); a sketch follows after this list
as you have to skip physicians that already exist in the target table, use NOT IN or NOT EXISTS
then insert "clean" data into the target table
That stored procedure will be executed after the Apex loading process is done; a simple way to do that is to create a button on the last page which will - when pressed - call the procedure.
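A minimal sketch of such a procedure, assuming hypothetical names (physicians_stage as the temporary table, physicians as the target, npi as the identifying column, and a placeholder specialty list), could look like this:

create or replace procedure load_new_physicians as
begin
  insert into physicians (npi, full_name, phone, specialty, loaded_at)
  select s.npi,
         s.full_name,
         -- keep digits only, then drop the leading "1" from 11-digit numbers
         regexp_replace(regexp_replace(s.phone, '[^0-9]', ''),
                        '^1([0-9]{10})$', '\1'),
         s.specialty,
         sysdate
    from physicians_stage s
   where s.specialty in ('CARDIOLOGY', 'ONCOLOGY')   -- placeholder list of wanted specialties
     and not exists (select null
                       from physicians p
                      where p.npi = s.npi);          -- skip physicians already in the target
end;

The loaded_at column anticipates the timestamp column suggested in the report stage below.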
The final stage is the report:
as you have to show new physicians, consider adding a column (to the target table) which will be a timestamp (perhaps DATE is enough, if you'll be doing this once a day) or a process_id (all rows inserted in the same run share the same value) so that you can distinguish newly added rows from the old ones
the report itself could be an Interactive Report. Why? Because it is easy to create and lets you (or end users) adjust it to their needs (filter data, sort rows in a different manner, ...)
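The interactive report's source could then be as simple as the query below, again assuming the hypothetical loaded_at column from the sketch above:

select *
  from physicians
 where loaded_at >= trunc(sysdate)   -- only physicians added by today's run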
Good luck! You'll need it.

Amazon Redshift: The DB is overriding created_at values with its own

I'm using a COPY command to load many files into the Redshift DB. Redshift's own created_at value is overriding the created_at timestamp specified in the JSON.
COPY test
FROM 's3://test/test'
credentials 'my credentials'
json 'auto';
An example would be:
The json being imported
{"foo":"bar", "created_at":"2018-09-05 17:48:34"}
This saves successfully in the DB, but the JSON timestamp is overwritten with the current time (i.e. 2018-09-10 16:00:28).
How can I make redshift respect the created_at times I am giving it?
Here is an excerpt from the official Redshift documentation on handling a column with a DEFAULT value:
If a column in the table is omitted from the column list, COPY will load the column with either the value supplied by the DEFAULT option that was specified in the CREATE TABLE command, or with NULL if the DEFAULT option was not specified.
So if you omit the column from the column list, COPY will always save the DEFAULT. And the default is only evaluated once, meaning all the rows will get the same value.
I believe this is probably not your case; the only possible culprit could be your json 'auto', which may unintentionally be making Redshift ignore created_at.
On the other hand, if you do include the column with the DEFAULT in your load, COPY always takes it from your data file; for records that don't supply the field, it is treated as null and loaded as null, and the DEFAULT logic is not applied. For example, if your data looks like this:
{"foo":"bar", "created_at":"2018-09-05 17:48:34"}
{"foo":"bar1","created_at":""}
{"foo":"bar2"}
{"foo":"bar3","created_at":null}
It will be populated in the database like below:
 foo  | created_at
------+---------------------
 bar2 |
 bar  | 2018-09-05 17:48:34
 bar1 |
 bar3 |
(4 rows)
So what options do you have to handle this situation?
The practical one is to keep the created_at column in your load and issue an UPDATE query immediately after loading your data, e.g.
update test set created_at = sysdate where created_at is null;
Please keep in mind that UPDATEs are costly operations in Redshift, as they are effectively DELETE + INSERT. Alternatively, transform your data at the source if that's not costly there, or compare the two approaches and see whether populating the DEFAULT suits your case best.
I hope this helps; if not, let me know via a comment and I'll refocus the answer.
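Putting the answer together, a minimal sketch could look like the following; the table definition is an assumption (not taken from the question), and it only illustrates the load-then-backfill idea:

-- assumed table definition; the DEFAULT is what fills the column when COPY supplies no value
CREATE TABLE test (
    foo        VARCHAR(256),
    created_at TIMESTAMP DEFAULT SYSDATE
);

-- load as before, then backfill any rows whose created_at arrived empty or missing
COPY test
FROM 's3://test/test'
CREDENTIALS 'my credentials'
JSON 'auto';

UPDATE test SET created_at = SYSDATE WHERE created_at IS NULL;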

In Redshift, how do you combine CTAS with the "if not exists" clause?

I'm having some trouble getting this table creation query to work, and I'm wondering if I'm running into a limitation in Redshift.
Here's what I want to do:
I have data that I need to move between schemas, and I need to create the destination tables for the data on the fly, but only if they don't already exist.
Here are queries that I know work:
create table if not exists temp_table (id bigint);
This creates a table if it doesn't already exist, and it works just fine.
create table temp_2 as select * from temp_table where 1=2;
So that creates an empty table with the same structure as the previous one. That also works fine.
However, when I do this query:
create table if not exists temp_2 as select * from temp_table where 1=2;
Redshift chokes and says there is an error near as (for the record, I did try removing "as", and then it says there is an error near select).
I couldn't find anything in the redshift docs, and at this point I'm just guessing as to how to fix this. Is this something I just can't do in redshift?
I should mention that I absolutely can separate out the queries that selectively create the table and populate it with data, and I probably will end up doing that. I was mostly just curious if anyone could tell me what's wrong with that query.
EDIT:
I do not believe this is a duplicate. The linked post offers a number of solutions that rely on user-defined functions, and Redshift doesn't support UDFs. They did recently implement a Python-based UDF system, but my understanding is that it's in beta, and we don't know how to implement it anyway.
Thanks for looking, though.
I couldn't find anything in the redshift docs, and at this point I'm just guessing as to how to fix this. Is this something I just can't do in redshift?
Indeed, this combination of CREATE TABLE ... AS SELECT and IF NOT EXISTS is not possible in Redshift (per the documentation). In PostgreSQL, it has been possible since version 9.5.
On SO, this is discussed here: PostgreSQL: Create table if not exists AS. The accepted answer provides options that don't require any UDFs or procedural code, so they're likely to work with Redshift too.
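One UDF-free pattern that does work in Redshift is to split the statement in two: CREATE TABLE ... (LIKE ...) accepts IF NOT EXISTS, and the data can be inserted separately. A rough sketch, reusing the table names from the question:

-- creates an empty table with temp_table's structure only if it is missing
create table if not exists temp_2 (like temp_table);

-- populate it in a separate statement when needed
insert into temp_2 select * from temp_table;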