I have a table like so
CREATE EXTERNAL TABLE IF NOT EXISTS something (
...
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '\\'
)
LOCATION 's3://...'
TBLPROPERTIES ('has_encrypted_data'='false');
but some fields contain an unquoted comma, like (8-10,99). The CSV is too large to open in Excel. Is there any way to change the delimiter or make Athena read this file?
If the fields are comma-separated but contain commas that are neither quoted nor escaped, there is no way for any automated tool to distinguish a comma that separates two fields from one that is part of the content. In other words, the files are malformed and have to be fixed. If you have the option of generating the files again, either make sure that fields are quoted, or use a separator that will not appear in the fields, such as a tab character.
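For example, if you can regenerate the export as tab-separated files, a table definition along these lines should read them (a minimal sketch only; the table and column names are placeholders, not your actual schema):
CREATE EXTERNAL TABLE IF NOT EXISTS something_tsv (
`col1` string,
`col2` string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t'
LOCATION 's3://...'
TBLPROPERTIES ('has_encrypted_data'='false');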
Related
I have a table in Athena whose data comes from a file that uses (";") as its separator, and I'm trying to configure this in the table, but it doesn't recognize the complete separator. How do I make it recognize the double quotes?
PARTITIONED BY (
`dt` date)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\"\;\"'
STORED AS INPUTFORMAT
I tried several variations in the DDL, but it doesn't understand the double quotes:
FIELDS TERMINATED BY '"\;"'
FIELDS TERMINATED BY '\";\"'
FIELDS TERMINATED BY '";"'
I have the following query to create a table in Athena out of existing files located in S3. As we can see, I am defining the linebreak character and how to manage null values:
CREATE EXTERNAL TABLE IF NOT EXISTS table_name(
`field1` STRING,
`field2` STRING
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
NULL DEFINED AS ' '
LOCATION 's3://bucket/prefix/'
TBLPROPERTIES ('skip.header.line.count'='1')
Now I also want to include the quotation character, but I don't see any property for that.
I tried using WITH SERDEPROPERTIES as shown below (where I can use quoteChar), but then I cannot find any SerDe property to define the "linebreak" and the "NULL management".
CREATE EXTERNAL TABLE IF NOT EXISTS table_name(
`field1` STRING,
`field2` STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ('separatorChar' = ',', 'quoteChar' = '"')
LOCATION 's3://bucket/prefix/'
TBLPROPERTIES ('skip.header.line.count'='1')
Is there any way of using quotation character, field delimiter, linebreak, and null management together?
This Athena table correctly reads the first line of the file.
CREATE EXTERNAL TABLE `test_delete_email5`(
`col1` string,
`col2` string,
`col3` string,
`col4` string,
`col5` string,
`col6` string,
`col7` string,
`col8` string,
`col9` string,
`col10` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'field.delim' = ',',
'LINES TERMINATED BY' = '\n',
'ESCAPED BY' = '\\',
'quoteChar' = '\"'
) LOCATION 's3://testme162/email_backup/email5/'
TBLPROPERTIES ('has_encrypted_data'='false')
This table is not imported correctly due to HTML code found in the 5th column. Is there any other way?
It appears that your file contains a lot of multi-line text in the textbody field. This does not conform to the CSV standard (or, at least, it cannot be understood by the OpenCSVSerde).
As a test, I made a simple file:
"newsletterid","name","format","subject","textbody","htmlbody","createdate","active","archive","ownerid"
"one","two","three","four","five","six","seven","eight","nine","ten"
"one","two","three","four","five \" quote \" five2","six","seven","eight","nine","ten"
"one","two","three","four","five \
five2","six","seven","eight","nine","ten"
Row 1 is the header
Row 2 is normal
Row 3 has a field with \" escaped quotes
Row 4 has escaped newlines
I then ran the command from your question and pointed it to this data file.
Result:
Rows 1-3 (including the header row) were returned
Row 4 only worked until the \ -- data after that was lost
Bottom line: Your file format is not compatible with the CSV format.
You might be able to find a SerDe that can handle it, but the OpenCSVSerde doesn't seem to understand it, because rows are normally split on newlines.
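As an aside, several of the SERDEPROPERTIES in your DDL ('field.delim', 'LINES TERMINATED BY', 'ESCAPED BY') are not settings the OpenCSVSerde looks at; it only reads separatorChar, quoteChar and escapeChar. A cleaned-up version of the table (a sketch only; it will not fix the embedded-newline problem) would look like this:
CREATE EXTERNAL TABLE `test_delete_email5`(
`col1` string, `col2` string, `col3` string, `col4` string, `col5` string,
`col6` string, `col7` string, `col8` string, `col9` string, `col10` string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '"',
'escapeChar' = '\\'
)
LOCATION 's3://testme162/email_backup/email5/'
TBLPROPERTIES ('has_encrypted_data'='false');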
Hi, I have created a table in Athena with the following query, which reads a CSV file from S3.
CREATE EXTERNAL TABLE IF NOT EXISTS axlargetable.AEGIntJnlTblStaging (
`filename` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"'
)
LOCATION 's3://ax-large-table/AEGIntJnlTblStaging/'
TBLPROPERTIES ('has_encrypted_data'='false');
But the value in the filename field looks like "\\emdc1fas\HR_UK\ADPFreedom_Employee_20141114_11.04.00.csv"
When I read this table, the value appears as
"\emdc1fasHR_UKADPFreedom_Employee_20141114_11.04.00.csv"
where all the escape characters (backslashes) are missing from the value.
How can I read the value so that it shows the actual value, including the backslashes?
Thanks
As long as you don't need the escaping, you can set the escape character to something unrelated (for example "|").
CREATE EXTERNAL TABLE IF NOT EXISTS axlargetable.AEGIntJnlTblStaging (
filename string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '\"',
'escapeChar' = '|'
)
LOCATION 's3://ax-large-table/AEGIntJnlTblStaging/'
TBLPROPERTIES ('has_encrypted_data'='false');
I am trying to use EMR/Hive to import data from S3 into DynamoDB. My CSV file has fields which are enclosed within double quotes and separated by commas.
While creating the external table in Hive, I am able to specify the delimiter as a comma, but how do I specify that fields are enclosed within quotes?
If I don't specify it, I see that the values in DynamoDB are populated within two double quotes ""value"", which seems to be wrong.
I am using the following command to create the external table. Is there a way to specify that fields are enclosed within double quotes?
CREATE EXTERNAL TABLE emrS3_import_1(col1 string, col2 string, col3 string, col4 string) ROW FORMAT DELIMITED FIELDS TERMINATED BY '","' LOCATION 's3://emrTest/folder';
Any suggestions would be appreciated.
Thanks
Jitendra
I was also stuck with the same issue, as my fields are enclosed in double quotes and separated by a semicolon (;). My table name is employee1.
After some searching I found a solution that works for this.
We have to use a SerDe for this. Download the SerDe jar from this link: https://github.com/downloads/IllyaYalovyy/csv-serde/csv-serde-0.9.1.jar
Then follow the steps below at the Hive prompt:
add jar path/to/csv-serde.jar;
create table employee1(id string, name string, addr string)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
with serdeproperties(
"separatorChar" = "\;",
"quoteChar" = "\"")
stored as textfile
;
Then load the data from your path using the query below:
load data local inpath 'path/xyz.csv' into table employee1;
Then run:
select * from employee1;
Now you will see the magic. Thanks.
The following code solved the same type of problem:
CREATE TABLE TableRowCSV2(
CODE STRING,
PRODUCTCODE STRING,
PRICE STRING
)
COMMENT 'row data csv'
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
"separatorChar" = "\,",
"quoteChar" = "\""
)
STORED AS TEXTFILE
tblproperties("skip.header.line.count"="1");
If you're stuck with the CSV file format, you'll have to use a custom SerDe; and here's some work based on the opencsv library.
But, if you can modify the source files, you can either select a new delimiter so that the quoted fields aren't necessary (good luck), or rewrite the files to escape any embedded commas with a single escape character, e.g. '\', which can be specified within the ROW FORMAT with ESCAPED BY:
CREATE EXTERNAL TABLE emrS3_import_1(col1 string, col2 string, col3 string, col4 string) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ESCAPED BY '\\' LOCATION 's3://emrTest/folder';
Hive now includes an OpenCSVSerde which will properly parse those quoted fields without adding additional jars or error-prone and slow regexes.
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
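A fuller example for the comma-separated, double-quoted file in the question might look like this (a sketch; the table name here is illustrative, and note that the OpenCSVSerde reads every column as a string):
CREATE EXTERNAL TABLE emrS3_import_quoted(col1 string, col2 string, col3 string, col4 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
'separatorChar' = ',',
'quoteChar' = '"'
)
LOCATION 's3://emrTest/folder';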
Hive doesn't support quoted strings right out of the box. There are two approaches to solving this:
Use a different field separator (e.g. a pipe).
Write a custom InputFormat based on OpenCSV.
The faster (and arguably more sane) approach is to modify your initial export process to use a different delimiter so you can avoid quoted strings. This way you can tell Hive to use an external table with a tab or pipe delimiter:
CREATE TABLE foo (
col1 INT,
col2 STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|';
Use the csv-serde-0.9.1.jar file in your Hive query; see
http://illyayalovyy.github.io/csv-serde/
add jar /path/to/jar_file;

create external table emrS3_import_1(col1 string, col2 string, col3 string, col4 string)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
with serdeproperties (
"separatorChar" = ",",
"quoteChar" = "\""
)
stored as textfile
location 's3://emrTest/folder'
tblproperties("skip.header.line.count"="1"); -- to skip the header line, if the files have one
There can be multiple solutions to this problem:
Write a custom SerDe class
Use RegexSerde (see the sketch below)
Remove the escaped delimiter characters from the data
Read more at http://grokbase.com/t/hive/user/117t2c6zhe/urgent-hive-not-respecting-escaped-delimiter-characters
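For the RegexSerde option, a sketch using Hive's built-in RegexSerDe could look like the following; the table name and the regex (which assumes exactly four double-quoted, comma-separated fields per line) are illustrative assumptions:
-- Each capture group in input.regex becomes one column; all columns must be strings.
CREATE EXTERNAL TABLE emrS3_import_regex(col1 string, col2 string, col3 string, col4 string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
'input.regex' = '"([^"]*)","([^"]*)","([^"]*)","([^"]*)"'
)
LOCATION 's3://emrTest/folder';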