I'm trying to create an external table in Athena. The problem is that the S3 bucket has different kinds of files in the same folder, so I can't use the folder as the table location.
I can't modify the paths of the S3 files, but I have a CSV manifest. I tried to use it as the location, but Athena didn't allow me to do that.
CREATE EXTERNAL TABLE `my_DB`.`my_external_table`(
column1 string,
column2 string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://mys3bucket/tables/my_table.csvmanifest'
TBLPROPERTIES (
'has_encrypted_data'='false',
'skip.header.line.count'='1')
Any ideas on how to use my manifest? Or another way to solve this without Athena? The goal of using Athena was to avoid pulling all the data from the CSVs, since I only need a few records.
You'll need to make a couple of changes to your CREATE TABLE statement:
use 'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat' as your INPUTFORMAT
ensure your LOCATION points to a folder
So your statement would look like:
CREATE EXTERNAL TABLE `my_DB`.`my_external_table`(
column1 string,
column2 string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.SymlinkTextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://mys3bucket/tables/my_table/'
And s3://mys3bucket/tables/my_table/ would have a single file in it containing the S3 paths of the CSV files you want to query - one path per line. I'm unsure whether the skip.header.line.count setting will operate on the manifest file itself or on the CSV files, so you will have to test.
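For illustration, the manifest file in that folder would contain nothing but S3 paths, one per line (these paths are hypothetical):
s3://mys3bucket/data/part1.csv
s3://mys3bucket/data/part2.csv
s3://mys3bucket/other/part3.csv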
Alternatively, if you have a limited number of files, you could use S3 Select to query for specific columns in those files, one at a time. Using the AWS CLI, the command to extract the 2nd column would look something like:
aws s3api select-object-content \
--bucket mys3bucket \
--key path/to/your.csv.gz \
--expression "select _2 from s3object limit 100" \
--expression-type SQL \
--input-serialization '{"CSV": {}, "CompressionType": "GZIP"}' \
--output-serialization '{"CSV":{}}' \
sample.csv
(Disclaimer: AWS employee)
I'm creating a table in Athena based on a list of CSV files in an S3 bucket. The files in the buckets are placed in folders like this:
$ aws s3 ls s3://bucket-name/ --recursive
2023-01-23 16:05:01 25601 logs2023/01/23/23/analytics_Log-1-2023-01-23-23-59-59-6dc5bd4c-f00f-4f34-9292-7bfa9ec33c55
2023-01-23 16:10:03 18182 logs2023/01/24/00/analytics_Log-1-2023-01-24-00-05-01-aa2cb565-05c8-43e2-a203-96324f66a5a7
2023-01-23 16:15:05 20350 logs2023/01/24/00/analytics_Log-1-2023-01-24-00-10-03-87b03989-c059-4fca-8e8b-909e787db889
2023-01-23 16:20:09 25187 logs2023/01/24/00/analytics_Log-1-2023-01-24-00-15-06-6d9b39fb-c05f-4416-9b17-415f48e63591
2023-01-23 16:25:18 20590 logs2023/01/24/00/analytics_Log-1-2023-01-24-00-20-16-3939a0fe-8cfb-4168-bc8e-e71d2122add5
This is the format for the folder structure:
logs{year}/{month}/{day}/{hour}/<filename>
I would like to use Athena's partition projection and this is how I'm creating my table:
CREATE EXTERNAL TABLE analytics.logs (
id string,
...
type tinyint)
PARTITIONED BY (
year bigint COMMENT '',
month string COMMENT '',
day string COMMENT '')
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://bucket-name/'
TBLPROPERTIES (
'classification'='csv',
'partition.day.values'='01,02,03,04,05,06,07,08,09,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31',
'partition.day.type'='enum',
'partition.enable'='true',
'partition.month.values'='01,02,03,04,05,06,07,08,09,10,11,12',
'partition.month.type'='enum',
'partition.year.range'='2022,2100',
'partition.year.type'='integer',
'storage.location.template'='s3://bucket-name/logs${year}/${month}/${day}/')
As you can see, I'm trying to partition the data using year, month, and day. Even though there's also an hour folder, I'm not interested in that. This command executes just fine and it creates the table too. But when I query the table:
SELECT * FROM analytics.logs LIMIT 10;
It returns empty. But if I create the same table without the PARTITIONED part, I can see the records. Can someone please help me understand what I'm doing wrong?
[UPDATE]
I simplified the folder structure to see if it works. It does not.
$ aws s3 ls s3://bucket-name/test --recursive
2023-01-24 07:03:30 0 test/
2023-01-24 07:03:59 0 test/2022/
2023-01-24 07:11:06 13889 test/2022/Log-1-2022-12-01-00-00-11-255f8d74-5417-42a0-8c09-97282a626903
2023-01-24 07:11:05 8208 test/2022/Log-1-2022-12-01-00-05-15-c34eda24-36d8-484c-b7b6-4861c297d857
CREATE EXTERNAL TABLE `log_2`(
`id` string,
...
`type` tinyint)
PARTITIONED BY (
`year` bigint COMMENT '')
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://bucket-name/test'
TBLPROPERTIES (
'classification'='csv',
'partition.enable'='true',
'partition.year.range'='2021,2023',
'partition.year.type'='integer',
'storage.location.template'='s3://bucket-name/test/${year}/')
And still the following query returns nothing:
SELECT * FROM "analytics"."log_2" where year = 2022 limit 10;
You have a mismatch in data types: the partition column year is declared as bigint, while the partition projection type is integer. Make both integers.
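Separately, note that the property names must use the projection. prefix rather than partition.. For reference, a working set of projection properties looks like the following (this example projects a single combined datehour key matching a yyyy/MM/dd/HH folder layout, instead of separate year/month/day columns):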
"projection.enabled" = "true",
"projection.datehour.type" = "date",
"projection.datehour.format" = "yyyy/MM/dd/HH",
"projection.datehour.range" = "2021/01/01/00,NOW",
"projection.datehour.interval" = "1",
"projection.datehour.interval.unit" = "HOURS",
In short: change the word partition to projection in the property names.
For anyone else who might make my mistake: the problem is that I was (incorrectly) using partition in the TBLPROPERTIES section, while it should have been projection.
To provide you with a working example:
CREATE EXTERNAL TABLE `log_2`(
id string,
...
type tinyint)
PARTITIONED BY (
`year` bigint COMMENT '')
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://bucket-name/test'
TBLPROPERTIES (
'classification'='csv',
'projection.enabled'='true',
'projection.year.range'='2021,2023',
'projection.year.type'='integer',
'storage.location.template'='s3://bucket-name/test/${year}/')
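A side note on partition projection: Athena computes the partition values at query time from the TBLPROPERTIES above, so there is no need to run MSCK REPAIR TABLE or ALTER TABLE ADD PARTITION as new data arrives. The query from the question then works directly:
SELECT * FROM "analytics"."log_2" where year = 2022 limit 10;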
I am trying to load files from S3 into Athena to perform a query operation. But all the column values are getting added to the first column.
I have a file in the following format:
id,user_id,personal_id,created_at,updated_at,active
34,34,43,31:28.4,27:07.9,TRUE
Table creation query:
CREATE EXTERNAL TABLE `testing`(
`id` string,
`user_id` string,
`personal_id` string,
`created_at` string,
`updated_at` string,
`active` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://testing2fa/'
TBLPROPERTIES (
'transient_lastDdlTime'='1665356861')
Please can someone tell me where I am going wrong?
You should add skip.header.line.count to your table properties to skip the first row. Since you have defined all columns with the string data type, Athena is unable to differentiate between the header and the first data row.
DDL with property added:
CREATE EXTERNAL TABLE `testing`(
`id` string,
`user_id` string,
`personal_id` string,
`created_at` string,
`updated_at` string,
`active` string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
's3://testing2fa/'
TBLPROPERTIES ('skip.header.line.count'='1')
The SerDe needs some parameters to recognize CSV files, such as:
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
See: LazySimpleSerDe for CSV, TSV, and custom-delimited files - Amazon Athena
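Putting the two answers together, a sketch of the complete DDL for this data (same columns and location as above; skip.header.line.count from the previous answer, the delimiter clauses from this one):
CREATE EXTERNAL TABLE `testing`(
`id` string,
`user_id` string,
`personal_id` string,
`created_at` string,
`updated_at` string,
`active` string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
LOCATION
's3://testing2fa/'
TBLPROPERTIES ('skip.header.line.count'='1')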
An alternative method is to use AWS Glue to create the tables for you. In the AWS Glue console, you can create a Crawler and point it to your data. When you run the crawler, it will automatically create a table definition in Amazon Athena that matches the supplied data files.
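A minimal sketch of the same approach with the AWS CLI (the crawler name, role, and database are hypothetical; the role needs Glue permissions and read access to the bucket):
aws glue create-crawler \
--name csv-crawler \
--role AWSGlueServiceRoleDefault \
--database-name testing \
--targets '{"S3Targets": [{"Path": "s3://testing2fa/"}]}'
aws glue start-crawler --name csv-crawler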
Getting a timeout error for a full-text query in Athena like this...
SELECT count(textbody) FROM "email"."some_table" where textbody like '% some text to search%'
Is there any way to optimize it?
Update:
The create table statement:
CREATE EXTERNAL TABLE `email`.`email5_newsletters_04032019`(
`nesletterid` string,
`name` string,
`format` string,
`subject` string,
`textbody` string,
`htmlbody` string,
`createdate` string,
`active` string,
`archive` string,
`ownerid` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ',',
'ESCAPED BY' = '\\'
) LOCATION 's3://some_bucket/email_backup_updated/email5/'
TBLPROPERTIES ('has_encrypted_data'='false');
And S3 bucket contents:
# aws s3 ls s3://xxx/email_backup_updated/email5/ --human
2020-08-22 15:34:44 2.2 GiB email_newsletters_04032019_updated.csv.gz
There are 11 million records in this file. The file can be imported into Redshift within 30 minutes and everything works fine there, but I would prefer to use Athena!
CSV is not a format that integrates very well with the Presto engine, since queries need to read the full row to reach a single column. A way to optimize your usage of Athena, which will also save you plenty of storage costs, is to switch to a columnar storage format like Parquet or ORC, and you can actually do the conversion with a query:
CREATE TABLE `email`.`email5_newsletters_04032019_orc`
WITH (
external_location = 's3://my_orc_table/',
format = 'ORC')
AS SELECT *
FROM `email`.`email5_newsletters_04032019`;
Then rerun your query above on the new table; since ORC stores data by column, Athena reads only the textbody column instead of the full rows:
SELECT count(textbody) FROM "email"."email5_newsletters_04032019_orc" where textbody like '% some text to search%'
Hi, currently I have created a table schema in AWS Athena as follows:
CREATE EXTERNAL TABLE IF NOT EXISTS axlargetable.AEGIntJnlActivityLogStaging (
`clientcomputername` string,
`intjnltblrecid` bigint,
`processingstate` string,
`sessionid` int,
`sessionlogindatetime` string,
`sessionlogindatetimetzid` bigint,
`recidoriginal` bigint,
`modifieddatetime` string,
`modifiedby` string,
`createddatetime` string,
`createdby` string,
`dataareaid` string,
`recversion` int,
`partition` bigint,
`recid` bigint
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
LOCATION 's3://ax-large-table/AEGIntJnlActivityLogStaging/'
TBLPROPERTIES ('has_encrypted_data'='false');
But one of the fields (processingstate) contains values with commas, such as "Europe, Middle East, & Africa", which displaces the column order.
So what would be the best way to read this file? Thanks.
When I removed this part
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
I was able to read quoted text with commas in it
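That is, a sketch of the simplified DDL (OpenCSVSerde's defaults are already a comma separator and a double-quote quote character, so quoted fields containing commas parse correctly):
CREATE EXTERNAL TABLE axlargetable.AEGIntJnlActivityLogStaging (
`clientcomputername` string,
-- ... same remaining columns as in the question ...
`recid` bigint
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://ax-large-table/AEGIntJnlActivityLogStaging/'
TBLPROPERTIES ('has_encrypted_data'='false');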
As a workaround, look at the AWS Glue project.
Instead of creating the table via CREATE EXTERNAL TABLE:
invoke get-table for your table
then build the JSON for create-table
merge in the following StorageDescriptor part:
{
"StorageDescriptor": {
"SerdeInfo": {
"SerializationLibrary": "org.apache.hadoop.hive.serde2.OpenCSVSerde"
...
}
...
}
perform the create via the AWS CLI, as sketched below. You will get this table in AWS Glue, and Athena will be able to select the correct columns.
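A minimal sketch of that round trip (database and table names are hypothetical; get-table returns extra read-only fields that --table-input does not accept, so trim the JSON down to the TableInput shape first):
aws glue get-table --database-name my_db --name my_table > table.json
# edit table.json: keep only the TableInput-compatible fields and set
# StorageDescriptor.SerdeInfo.SerializationLibrary to
# org.apache.hadoop.hive.serde2.OpenCSVSerde
aws glue create-table --database-name my_db --table-input file://table.json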
Notes
If your table was already defined with OpenCSVSerde, they may have fixed this issue by now, and you can simply recreate the table.
I do not have much knowledge about Athena, but in AWS Glue you can delete or create a table without any data loss.
Before adding this table via create-table, first check how Glue and/or Athena handle table duplicates.
This is the common messy-CSV situation where certain values contain commas. The solution in Athena is to use SERDEPROPERTIES, as described in the AWS doc https://docs.aws.amazon.com/athena/latest/ug/csv-serde.html [the URL may change, so just search for 'OpenCSVSerDe for Processing'].
Following is the basic create-table example provided there. Based on your data, you would have to ensure that the data types are specified correctly (e.g. string):
CREATE EXTERNAL TABLE test1 (
f1 string,
s2 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "escapeChar" = "\")
LOCATION 's3://user-test-region/dataset/test1/'
My bucket used to have this structure:
mybucket/raw/i1.json
mybucket/raw/i2.json
It was easy and straightforward to create the table in Amazon Athena using the code below.
CREATE EXTERNAL TABLE IF NOT EXISTS myclients.big_clients (
`id_number` string,
`txt` string,
...
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1'
) LOCATION 's3://mybucket/raw/'
TBLPROPERTIES ('has_encrypted_data'='false');
Now I'm facing some problems with a migration in the bucket structure.
The new structure of the bucket is shown below.
mybucket/raw/1/i1.json
mybucket/raw/1/docs/doc_1.json
mybucket/raw/1/docs/doc_2.json
mybucket/raw/1/docs/doc_3.json
mybucket/raw/2/i2.json
mybucket/raw/2/docs/doc_1.json
mybucket/raw/2/docs/doc_2.json
I wish I could now create two tables: the same table I had before the migration, and a new one with only the docs.
Is there any way I could do that without having to rearrange my files into another folder?
I'm searching for some kind of wildcard for the bucket files in the creation of the table:
CREATE EXTERNAL TABLE IF NOT EXISTS myclients.big_clients (
`id_number` string,
`txt` string,
...
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1',
'input.regex' = 'i*.json'
) LOCATION 's3://mybucket/raw/'
TBLPROPERTIES ('has_encrypted_data'='false');
CREATE EXTERNAL TABLE IF NOT EXISTS myclients.big_clients_docs (
`dt` date,
`txt` string,
`id_number` string,
`s3_doc_path` string,
`s3_doc_path_origin` string
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = '1',
'input.regex' = 'doc_*.json'
) LOCATION 's3://mybucket/raw/'
TBLPROPERTIES ('has_encrypted_data'='false');
I was looking for the same thing. Unfortunately this is not possible, because the S3 API is not that wildcard-friendly (it would require scanning all the keys client-side, which is slow). The Athena documentation also states that this is not supported.