I am running an Athena query to get the difference between two tables, as below:
CREATE TABLE TableName
WITH (
format = 'TEXTFILE',
write_compression = 'NONE',
external_location = s3Location)
AS
SELECT * FROM currentTable
EXCEPT
SELECT * FROM previousTable;
I would like to store the result of the query in the specified s3Location as an S3 multipart file (a file with an extension like .0000_part_00) and in TSV format. How can I achieve this?
I am trying to do it programmatically in Java.
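A minimal sketch of a CTAS statement along these lines, assuming that field_delimiter = '\t' is what makes the TEXTFILE output tab-separated (the table name and S3 path are placeholders):
-- field_delimiter '\t' should make the TEXTFILE output tab-separated (TSV)
CREATE TABLE my_diff_table
WITH (
format = 'TEXTFILE',
field_delimiter = '\t',
write_compression = 'NONE',
external_location = 's3://my-bucket/diff-output/')
AS
SELECT * FROM currentTable
EXCEPT
SELECT * FROM previousTable;
Athena should write the result as one or more part files under external_location on its own, so the multipart output would not need any extra handling.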
I need to query some data in AWS Athena. The source data in s3 is compressed json .gz format. It was created with the parameter
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
If I just do 'select *' there's one column like this:
{userid={s=my_email#gmail.com}, timestamp=2022-07-21 10:00:00, appID={s=greatApp}, etc.}
I am trying to query like this:
with dataset as
(select * FROM "default"."my_table" limit 10)
select json_extract(item, '$.userid') as user
from dataset;
But I am getting an error:
Expected: json_extract(varchar(x), JsonPath) , json_extract(json, JsonPath)
Is there something wrong with my query?
I got it. You just use "dot" notation to access the keys:
select item.userid.s as user,
item.timestamp,
item.appID.s as appID
from my_table limit 10;
I am generating a gzipped CSV file using the query below:
CREATE TABLE newgziptable4
WITH (
format = 'TEXTFILE',
write_compression = 'GZIP',
field_delimiter = ',',
external_location = 's3://bucket1/reporting/gzipoutputs4'
) AS
select name, birthdate from "myathena_table";
I am following this link on the AWS website.
The issue is that if I just generate a CSV, I see the column names as the first row in the output CSV. But when I use the above method, I do not see the column names name and birthdate. How can I ensure that I get those in the gz output as well?
I am getting a timeout error for a full-text query in Athena like this:
SELECT count(textbody) FROM "email"."some_table" where textbody like '% some text to search%'
Is there any way to optimize it?
Update:
The create table statement:
CREATE EXTERNAL TABLE `email`.`email5_newsletters_04032019`(
`nesletterid` string,
`name` string,
`format` string,
`subject` string,
`textbody` string,
`htmlbody` string,
`createdate` string,
`active` string,
`archive` string,
`ownerid` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ',',
'ESCAPED BY' = '\\'
) LOCATION 's3://some_bucket/email_backup_updated/email5/'
TBLPROPERTIES ('has_encrypted_data'='false');
And S3 bucket contents:
# aws s3 ls s3://xxx/email_backup_updated/email5/ --human
2020-08-22 15:34:44 2.2 GiB email_newsletters_04032019_updated.csv.gz
There are 11 million records in this file. The file can be imported into Redshift within 30 minutes and everything works fine there, but I would prefer to use Athena!
CSV is not a format that integrates very well with the Presto engine, because queries have to read the full row to reach a single column. One way to optimize your use of Athena, which will also save you plenty of storage cost, is to switch to a columnar storage format such as Parquet or ORC, and you can do the conversion with a query:
CREATE TABLE `email`.`email5_newsletters_04032019_orc`
WITH (
external_location = 's3://my_orc_table/',
format = 'ORC')
AS SELECT *
FROM `email`.`email5_newsletters_04032019`;
Then rerun your query above on the new table:
SELECT count(textbody) FROM "email"."email5_newsletters_04032019_orc" where textbody like '% some text to search%'
create external table reason (
reason_id int,
retailer_id int,
reason_code string,
reason_text string,
ordering int,
creation_date date,
is_active tinyint,
last_updated_by int,
update_date date
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\t",
"quoteChar" = "'",
"escapeChar" = "\\"
)
STORED AS TEXTFILE
location 's3://bucket_name/athena-workspace/athena-input/'
TBLPROPERTIES ("skip.header.line.count"="1");
The query above executes successfully; however, there are no files in the provided location!
Upon successful execution the table is created but is empty. How is this possible?
Even if I upload a file to the provided location, the created table is still empty!
Athena is not a data store; it is simply a serverless tool for reading data in S3 using SQL-like expressions.
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
This query creates only the table's metadata; it doesn't write to that location, it reads from it.
If you put a CSV into that location and ran select * from reason, Athena would attempt to map any file under the prefix athena-workspace/athena-input/ in bucket bucket_name to your table definition, using the ROW FORMAT and SERDEPROPERTIES to parse the files. It would also skip the first line of each file, assuming it is a header.
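As a rough sketch, once a file has been uploaded under that prefix (the file name below is made up), a plain SELECT is what actually reads it:
-- after something like: aws s3 cp reasons.tsv s3://bucket_name/athena-workspace/athena-input/reasons.tsv
SELECT reason_id, reason_code, reason_text
FROM reason
LIMIT 10;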
I ran a simple query on CSV data using the Athena dashboard. The result was a CSV with column headers.
When storing the results, Athena includes the column headers in S3. How can I skip storing the header column names? I have to make a new table from the results, and the repeated header gets in the way.
Try "skip.header.line.count"="1". This feature has been available in AWS Athena since 2018-01-19. Here's a sample:
CREATE EXTERNAL TABLE IF NOT EXISTS tableName (
`field1` string,
`field2` string,
`field3` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
LOCATION 's3://fileLocation/'
TBLPROPERTIES ('skip.header.line.count'='1')
You can refer to this question:
Aws Athena - Create external table skipping first row
From an Eric Hammond post on AWS Forums:
...
WHERE
date NOT LIKE '#%'
...
I found this works! The steps I took:
Run an Athena query, with the output going to Amazon S3
Created a new table pointing to this output based on How do I use the results of my Amazon Athena query in another query?, changing the path to the correct S3 location
Ran a query on the new table with the above WHERE <datefield> NOT LIKE '#%'
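A minimal sketch of that last query, with made-up names for the results table and the date column:
-- filters out the rows whose date column starts with '#'
SELECT *
FROM my_query_results
WHERE datefield NOT LIKE '#%';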
However, each later query stores even more result files in that S3 directory, which confuses subsequent runs.