How do I drop a table and recreate it via Informatica Pre SQL?

We are trying to drop and recreate a table via an Informatica mapping using the Pre SQL option. Informatica throws an insufficient-privilege error even though we have granted privileges to the Informatica user. Is it possible to drop and create a table via Pre SQL, or is there another way to accomplish this?
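For reference, the usual Pre SQL pattern is just the two statements back to back. A minimal sketch, assuming an Oracle target and a hypothetical staging table named STG_ORDERS:
-- Hypothetical example; the DROP raises ORA-00942 on the first run
-- if the table does not exist yet, unless it is wrapped in a PL/SQL block.
DROP TABLE STG_ORDERS;
CREATE TABLE STG_ORDERS (
    order_id NUMBER,
    status   VARCHAR2(20)
);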

Related

How to create a private temporary table in Oracle 19?

I am running Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production in a Docker container.
I created a user with the CREATE SESSION and CREATE TABLE system privileges. The user also has QUOTA UNLIMITED.
CREATE USER airflow IDENTIFIED BY pass;
GRANT CREATE SESSION TO airflow;
GRANT CREATE TABLE TO airflow;
ALTER USER airflow QUOTA UNLIMITED ON USERS;
With that user I attempted to create a private temporary table with the following query:
CREATE PRIVATE TEMPORARY TABLE ora$ppt_temp1 (
name varchar2(7),
age int,
employed int
) ON COMMIT PRESERVE DEFINITION;
I am accessing the database on Python 3.9.13 using SQLAlchemy 1.3.24.
I get the following error:
sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) ORA-00903: invalid table name
I also get ORA-00903 when running the query from DBeaver. I have checked the private_temp_table_prefix parameter, and it is set to the default value of ORA$PTT_. I have read through the Oracle 19c documentation and several Stack Overflow questions and cannot see what I am missing here.
I suspect that there is some privilege I need to add or modify to make this work.
As stated in the comments, this was a typo in the table name: ora$ppt_temp1 starts with ORA$PPT_, but private temporary table names must begin with the configured prefix, which is ORA$PTT_ by default.
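For completeness, the statement with the prefix corrected to match the default private_temp_table_prefix:
CREATE PRIVATE TEMPORARY TABLE ora$ptt_temp1 (
name varchar2(7),
age int,
employed int
) ON COMMIT PRESERVE DEFINITION;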

Define AWS database to use in Custom SQL?

I am creating a dataset in AWS QuickSight using custom SQL, which I prepare and test in Athena. However, unless I qualify each table in every join as "databasename".table, the QuickSight custom SQL fails. I have tried the below, but it also failed. Is it possible to instruct the query to run against a specific database at the beginning of the query?
USING AwsDataCatalog."databasename"
In data preparation, on the custom SQL page, in the left pane, you should be able to choose the database name (schema).
If you do not set it, Athena's default schema is used, so you have to fully qualify all table names.
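To illustrate the fully qualified form, a minimal sketch with hypothetical table names:
-- Three-part names: catalog, database, table.
SELECT o.order_id, c.name
FROM "AwsDataCatalog"."databasename"."orders" o
JOIN "AwsDataCatalog"."databasename"."customers" c
ON o.customer_id = c.customer_id;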

Restrict access to a table in SQL Lab in Superset

I have a database with many tables. Users have full access to this database and its tables to create various charts and dashboards. They use SQL Lab extensively to write custom queries.
However, I added sensitive data in a separate table that should be accessible only to a small set of users. How can I achieve this?
I tried the row-level security feature.
However, it applies only to virtual tables created in Superset. I want to restrict direct SQL Lab access as well.
Possible Solution:
Create an ACL at the database level and create a separate connection in Superset.
Cons: this requires connecting to the same database twice.
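As a sketch of what that database-level ACL might look like, assuming a PostgreSQL backend and hypothetical role and table names:
-- Hypothetical names; run as a database admin.
CREATE ROLE superset_restricted LOGIN PASSWORD 'changeme';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO superset_restricted;
REVOKE SELECT ON sensitive_table FROM superset_restricted;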
Ideal solution:
Restrict SQL Lab access to specific tables at the Superset level, e.g. Superset should check user roles and ACLs and decide whether a table can be queried.
Is this possible?
Maybe consider implementing proper access control over your data with Apache Ranger, and have Superset impersonate the logged-in user.

How can I access the metadata DB of a GCP Composer Airflow server?

I have created a Composer environment in a GCP project. I want to access the Airflow metadata DB, which runs in the background on Cloud SQL.
How can I access it?
I also want to create a table inside that metadata DB to store some data queried by one of my Airflow DAGs. Is it OK to create a table inside the metadata DB, or is it reserved for the Airflow server's own use?
You can access Airflow's internal DB via the UI using Data Profiling -> Ad Hoc Query.
There you can see all the tables with a SQL query like :
SHOW tables;
I wouldn't recommend creating new tables or manually inserting rows into the existing tables, though.
You should also be able to access this DB from operators and sensors in your DAGs by using the airflow_db connection.
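For example, a read-only query you could run from Ad Hoc Query, assuming the standard Airflow schema:
-- Last ten DAG runs, newest first.
SELECT dag_id, state, execution_date
FROM dag_run
ORDER BY execution_date DESC
LIMIT 10;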

Migrate data to SQL DW for multiple tables

I'm currently using Azure Data Factory to move over data from an Azure SQL database to an Azure DW instance.
This works fine for one table, but I have a lot of tables I'd like to move over. Using Azure Data Factory, it looks like I need to create a set of source/sink datasets and pipelines for every table in the database.
Is there a way to move multiple tables across without having to set up each table in the manner described above?
The copy operation allows you to select multiple tables to move in a single pipeline. From the Azure SQL Data Warehouse portal you can follow this process to set up a multi-table pipeline:
Click on the Load Data button
Select Azure Data Factory
Create a new data factory or use an existing one, ensuring that the Load Data option is selected
Select the Run once now option
Choose your Azure SQL Database source and enter the credentials
On the Select Tables screen, select multiple tables
Continue through the pipeline, then save and execute it