I am trying to create a view in Hive that fetches data from an S3 bucket while excluding certain folders. I was able to successfully create the view in Athena, but couldn't do the same in Hive.
Athena View:
CREATE VIEW test
as
SELECT *
FROM TABLE_A
WHERE NOT ("$path" LIKE '%PASSENGER_DATA%')
AND NOT ("$path" LIKE '%CUSTOMER_DATA%');
Could you please advise how the same could be achieved on Hive?
You may actually wish to consider moving the data into its own folders. Then you could build each folder as its own table and combine them in a view:
CREATE VIEW TABLE_DATA
as
SELECT *
FROM TABLE_A -- ("$path" LIKE '%PASSENGER_DATA%')
UNION
SELECT *
FROM TABLE_B; -- ("$path" LIKE '%CUSTOMER_DATA%')
This will likely also make your permission issues easier to manage.
And when needed you can easily query one table or both tables.
Hive doesn't have the same facility to filter by path. However, depending on which version you are using, you could use Ranger to exclude the data so it isn't shown.
If you must do it with a view, try using:
CREATE EXTERNAL TABLE filter_out (...)   -- column definitions, at least the ID join key
LOCATION 's3://your-bucket/.../CUSTOMER_DATA/';   -- the folder you want to exclude

SELECT *
FROM TABLE_A
WHERE NOT EXISTS (SELECT ID FROM filter_out WHERE TABLE_A.ID = filter_out.ID);
Related
I've created a simple view in Redshift that is select * from source_a with no schema binding. Now I have a new table, source_b, with no new columns or data types, that I would like the view to point to instead.
How can I repoint the view without dropping permissions?
You run the CREATE again but include the OR REPLACE syntax.
CREATE OR REPLACE VIEW my_view AS …
WITH NO SCHEMA BINDING;
You can retrieve the existing view definition with pg_get_viewdef.
SELECT pg_get_viewdef('my_view', true);
First off, take a look at the CREATE VIEW documentation page: https://docs.aws.amazon.com/redshift/latest/dg/r_CREATE_VIEW.html
You have made a late-binding view, which checks permissions on the tables when executed, so you need to make sure you have the correct permissions on that new table, "source_b". These late-binding views are a lookup / reference to the tables rather than a predefined link in the database. You also cannot grant column-level access to a late-binding view. Otherwise permissions work the same as regular views.
To change the view without having to change the permissions on it, you will want to use "create or replace view ..." (see the link above) to update the definition. Just remember that the permissions of the new table will matter.
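For example, if the original view simply selected everything from source_a, repointing it might look like this (the public schema is an assumption; late-binding views require schema-qualified table names):

CREATE OR REPLACE VIEW my_view AS
SELECT * FROM public.source_b
WITH NO SCHEMA BINDING;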
I have a table in AWS Glue which uses an S3 bucket for its data location. I want to execute an Athena query on that existing table and use the query results to create a new Glue table.
I have tried creating a new Glue table, pointing it to a new location in S3, and piping the Athena query results to that S3 location. This almost accomplishes what I want, but:
- a .csv.metadata file is put in this location along with the actual .csv output (which is read by the Glue table as it reads all files in the specified S3 location), and
- the CSV file places double quotes around each field, which ruins any fieldSchema defined in the Glue table that uses numbers.
These services are all designed to work together, so there must be a proper way to accomplish this. Any advice would be much appreciated :)
The way to do that is by using CTAS query statements.
A CREATE TABLE AS SELECT (CTAS) query creates a new table in Athena from the results of a SELECT statement from another query. Athena stores data files created by the CTAS statement in a specified location in Amazon S3.
For example:
CREATE TABLE new_table
WITH (
    external_location = 's3://my_athena_results/new_table_files/'
) AS (
    -- Here goes your normal query
    SELECT *
    FROM old_table
);
There are some limitations, though. For your case the most important are:
The destination location for storing CTAS query results in Amazon S3 must be empty.
The same applies to the name of the new table, i.e. it shouldn't exist in the AWS Glue Data Catalog.
In general, you don't have explicit control over how many files will be created as a result of a CTAS query, since Athena is a distributed system.
However, you can try this workaround, which uses the bucketed_by and bucket_count fields within the WITH clause to force a single output file:
CREATE TABLE new_table
WITH (
    external_location = 's3://my_athena_results/new_table_files/',
    bucketed_by = ARRAY['some_column_from_select'],
    bucket_count = 1
) AS (
    -- Here goes your normal query
    SELECT *
    FROM old_table
);
Apart from creating new files and defining a table associated with them, you can also convert your data to a different file format, e.g. Parquet, JSON, etc.
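For instance, a CTAS that writes Parquet instead of CSV might look like this (the output location is an assumption):

CREATE TABLE new_table_parquet
WITH (
    format = 'PARQUET',
    external_location = 's3://my_athena_results/new_table_parquet_files/'
) AS (
    SELECT *
    FROM old_table
);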
I guess you have to change your SerDe. If you are querying CSV data, either OpenCSVSerde or LazySimpleSerDe should work for you.
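For example, a minimal sketch of a table definition using OpenCSVSerde, which handles the double quotes around each field (columns, separator and location are assumptions):

CREATE EXTERNAL TABLE my_csv_table (
    id string,
    amount string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    'separatorChar' = ',',
    'quoteChar' = '"'    -- strips the surrounding double quotes from each field
)
LOCATION 's3://my_athena_results/new_table_files/';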
Is it possible to delete data stored in S3 through an Athena query? I have some rows I have to delete from a couple of tables (they point to separate buckets in S3).
I couldn't find a way to do it in the Athena User Guide: https://docs.aws.amazon.com/athena/latest/ug/athena-ug.pdf and DELETE FROM isn't supported, but I'm wondering if there is an easier way than trying to find the files in S3 and deleting them.
You can leverage Athena to find out all the files that you want to delete and then delete them separately. There is a special variable "$path".
Select "$path" from <table> where <condition to get row of files to delete>
To automate this, you can iterate over the Athena results, extract the filenames, and delete them from S3.
I would also like to add that after you find the files to be updated, you can filter out the rows you want to delete and create new files using CTAS:
https://docs.aws.amazon.com/athena/latest/ug/ctas.html
Later you can replace the old files with the new ones created by CTAS. I think it is the simplest way to go.
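A minimal sketch of that approach (the table name, location and delete condition are all made up):

CREATE TABLE my_table_cleaned
WITH (
    external_location = 's3://my-bucket/my_table_cleaned/'
) AS (
    SELECT *
    FROM my_table
    WHERE id NOT IN (1, 2, 3)   -- keep everything except the rows to delete
);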
The answer is yes: you can now delete data from Athena. AWS has recently introduced the Iceberg table format, which supports ACID properties.
You need to create an Iceberg table that will have the same data as your Athena table (where you want to delete/update records) using the steps below.
Create the Iceberg table:
CREATE TABLE new_iceberg_table (id double, name string)
LOCATION 's3://path/where/you/want/to_save/'
TBLPROPERTIES ('table_type' = 'ICEBERG');
Load data from your Data Catalog into this new Iceberg table:
INSERT INTO datasource.new_iceberg_table
SELECT * FROM datasource.main_athena_table;
main_athena_table = Table where you want to perform Delete/Update or ACID.
new_iceberg_table = Newly created table
Now you can insert, update, and delete data from the Iceberg table.
You can also time travel using SYSTEM_TIME.
Relevant SQL statements:
Update:
UPDATE datasource.new_iceberg_table SET id = 04 WHERE name = 'ABC';
Delete:
DELETE FROM datasource.new_iceberg_table WHERE name = 'ABC';
Time travel (in case you want to see the older data):
SELECT * FROM datasource.new_iceberg_table FOR SYSTEM_TIME AS OF (current_timestamp - interval '10' minutes) WHERE name = 'ABC';
Thank you.
I would just like to add to Dhaval's answer.
You can find out the path of the file containing the rows you want to delete and, instead of deleting the entire file, just remove those rows from the S3 file, which I am assuming is in JSON format.
The process is to download the particular file which has those rows, remove the rows from that file and upload the same file to S3.
This just replaces the original file with the one with modified data (in your case, without the rows that got deleted). After the upload, Athena would transform the data again and the deleted rows won't show up.
Use AWS Glue for that.
Load your data, delete what you need to delete, save the data back.
Now you can also delete files from S3 and merge data: https://aws.amazon.com/about-aws/whats-new/2020/01/aws-glue-adds-new-transforms-apache-spark-applications-datasets-amazon-s3/
AWS has announced general availability of Iceberg integration with Athena, and Athena now supports row-level DML for Iceberg tables.
UPDATE and DELETE of rows can be done using SQL:
DELETE FROM [db_name.]table_name [WHERE predicate]
UPDATE [db_name.]table_name SET xx=yy[,...] [WHERE predicate]
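For example (hypothetical table and predicates):

DELETE FROM mydb.orders WHERE order_date < DATE '2020-01-01';
UPDATE mydb.orders SET order_status = 'archived' WHERE order_id = 42;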
For more details, see the AWS documentation.
Note that the Athena Iceberg integration is currently quite restrictive: nested SQL for deletes and updates is NOT supported.
The query below won't work:
DELETE FROM table1 WHERE uniqueid IN (SELECT b.uniqueid FROM delete_staging b);
Shivendra Singh's answer about Iceberg should be accepted, as Iceberg seems to answer all needs now. But if you need to stay on a Hive table, or if your file format is JSON and you need to keep it that way, you have the following option:
1. Use CTAS to create a new table with the values you want to keep (see the sketch after this list). If it's hard to phrase the query this way, you can always do something like where id not in (select id ...) or select * from ... except select * from .... If your table is partitioned and after the deletion there will be more than 100 partitions, you'll need to use the "insert into" technique to create up to 100 partitions per query (https://docs.aws.amazon.com/athena/latest/ug/ctas-insert-into.html).
2. Move (just in case) the original data out of S3 for the partitions that were relevant to the deletion.
3. Move the data that was created by step 1 into the original location.
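A rough sketch of step 1 combined with the "insert into" technique (the table, partition column, and the rows_to_delete helper table are all assumptions):

-- First batch of (up to 100) partitions via CTAS; partition columns must come last in the SELECT
CREATE TABLE my_table_cleaned
WITH (
    external_location = 's3://my-bucket/my_table_cleaned/',
    partitioned_by = ARRAY['dt']
) AS (
    SELECT id, payload, dt
    FROM my_table
    WHERE dt <= '2020-06-30'
      AND id NOT IN (SELECT id FROM rows_to_delete)
);

-- Remaining partitions via INSERT INTO (again up to 100 partitions per statement)
INSERT INTO my_table_cleaned
SELECT id, payload, dt
FROM my_table
WHERE dt > '2020-06-30'
  AND id NOT IN (SELECT id FROM rows_to_delete);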
Is there a way (using existing templates) to select data from multiple tables by joining them using AWS Data Pipeline? My use case requires me to combine data from multiple RDS tables to export to Redshift.
For example, RDS has tables School, Student, and District. I want to export data like:
SELECT sch.Name, stu.Name, dis.Name
FROM School sch
INNER JOIN Student stu ON stu.schoolid = sch.id
INNER JOIN District dis ON dis.id = sch.districtid;
Is there a way in AWS Data Pipeline for me to select data from multiple tables?
There is a field named "Select Query" in the data node. You can write your transformation SQL there to pull data from different tables.
Please refer to the "Select Query" field in the data node (screenshot omitted).
You can create a single pipeline that will have different activities for each table that you want to replicate.
This way, you won't have to write a join query to replicate multiple tables.
Answering an old question, so that it can help others still searching for something like this.
Our s3 buckets generally have a number of sub-directories, so that the path to a bucket is something like s3:top-level-function-group/more-specific-folder/org-tenant-company-id/entityid/actual-data
We're looking into Athena to be able to query against data on that /actual-data level, but within the org-tenant-company-id, so that would have to be passed as some kind of parameter.
Or would that org-tenant-company-id be a partition?
Is it possible to create an Athena table that queries against this structure? And what would the S3 location be in the create-table wizard? I tried it with s3:top-level-function-group/more-specific-folder/ but when it ran, I think it said something like '0 KB data read'.
You can create a partitioned table as follows, where the partition keys are defined only in the PARTITIONED BY clause, not in the list of table fields:
CREATE EXTERNAL TABLE mydb.mytable (
id int,
stuff string,
...
)
PARTITIONED BY (
orgtenantcompanyid string
)
LOCATION 's3://mybucket/top-level-function-group/more-specific-folder/';
After creating the table, you can then load individual partitions:
ALTER TABLE mydb.mytable ADD PARTITION (orgtenantcompanyid='org1')
LOCATION 's3://mybucket/top-level-function-group/more-specific-folder/org1';
Result rows will contain the partition fields like orgtenantcompanyid.
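Queries can then filter on the partition key, for example:

SELECT *
FROM mydb.mytable
WHERE orgtenantcompanyid = 'org1'
LIMIT 10;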
Yes, it is possible to create tables that only use contents of a specific subdirectory.
It's normal that after creating your table you see 0kb read. That's because no data is read when you CREATE a table.
To check whether you can actually query the data, do something like:
SELECT * FROM <table_name> LIMIT 10
Partitioning only makes sense if the data structure is identical in all the different directories so that the table definition applies to all the data under the location.
And yes, it's possible to use the path structure to create partitions. However, not automatically if it's not in the right format /key=value/. You can use the path as an attribute, though, as explained here: How to get input file name as column in AWS Athena external tables
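For example, a sketch of pulling the tenant id out of the file path with "$path" (the regular expression, filter value and table name are assumptions):

SELECT
    "$path" AS source_file,
    regexp_extract("$path", '/more-specific-folder/([^/]+)/', 1) AS org_tenant_company_id
FROM my_table
WHERE "$path" LIKE '%/some-tenant-id/%'
LIMIT 10;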