How to load data from CSV into an external table in Impala - HDFS

I am following this solution for loading an external table into Impala, because I get the same error if I load the data by referring to the file directly.
So, if I run:
[quickstart.cloudera:21000] > create external table Police2 (Priority string,Call_Type string,Jurisdiction string,Dispatch_Area string,Received_Date string,Received_Time int,Dispatch_Time int,Arrival_Time int,Cleared_Time int,Disposition string) row format delimited
> fields terminated by ','
> STORED as TEXTFILE
> location '/user/cloudera/rdpdata/rpd_data_all.csv' ;
I get:
Query: create external table Police2 (Priority string,Call_Type string,Jurisdiction string,Dispatch_Area string,Received_Date string,Received_Time int,Dispatch_Time int,Arrival_Time int,Cleared_Time int,Disposition string) row format delimited
fields terminated by ','
STORED as TEXTFILE
location '/user/cloudera/rdpdata/rpd_data_all.csv'
ERROR: ImpalaRuntimeException: Error making 'createTable' RPC to Hive Metastore:
CAUSED BY: MetaException: hdfs://quickstart.cloudera:8020/user/cloudera/rdpdata/rpd_data_all.csv is not a directory or unable to create one
and if I run the below, nothing gets imported:
[quickstart.cloudera:21000] > create external table Police2 (Priority string,Call_Type string,Jurisdiction string,Dispatch_Area string,Received_Date string,Received_Time int,Dispatch_Time int,Arrival_Time int,Cleared_Time int,Disposition string) row format delimited
> fields terminated by ','
> location '/user/cloudera/rdpdata' ;
Query: create external table Police2 (Priority string,Call_Type string,Jurisdiction string,Dispatch_Area string,Received_Date string,Received_Time int,Dispatch_Time int,Arrival_Time int,Cleared_Time int,Disposition string) row format delimited
fields terminated by ','
location '/user/cloudera/rdpdata'
Fetched 0 row(s) in 1.01s
and the content of the folder:
[cloudera@quickstart ~]$ hadoop fs -ls /user/cloudera/rdpdata
Found 1 items
-rwxrwxrwx 1 cloudera cloudera 75115191 2020-09-02 19:36 /user/cloudera/rdpdata/rpd_data_all.csv
and the content of the file:
[cloudera@quickstart ~]$ hadoop fs -cat /user/cloudera/rdpdata/rpd_data_all.csv
1,EMSP,RP,RC, 03/21/2013,095454,000000,000000,101659,CANC
and a screenshot of the Cloudera QuickStart VM.

The LOCATION clause in the Impala CREATE TABLE statement expects an hdfs_path, i.e. the HDFS directory where the data files are stored, not a single file. Try giving the directory location instead of the file name; that should let you use the existing data.
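For example, pointing the table at the directory that already holds rpd_data_all.csv (essentially your second statement) and then querying it; note that the Fetched 0 row(s) line you saw is just what impala-shell prints for the CREATE statement itself, so a SELECT is needed to check whether the data is visible:
create external table Police2 (Priority string, Call_Type string, Jurisdiction string, Dispatch_Area string, Received_Date string, Received_Time int, Dispatch_Time int, Arrival_Time int, Cleared_Time int, Disposition string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/rdpdata';
select count(*) from Police2;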
For your reference: https://impala.apache.org/docs/build/html/topics/impala_tables.html

Related

How does the BigQuery command line tool "bq query" stop printing the sql query string for some jobs?

Let's say the following file exists in the current directory:
# example1.sql
SELECT 1 AS data
When I run bq query --use_legacy_sql=false --format=csv < example1.sql I get the expected output:
Waiting on bqjob_r1cc6530a5317399f_0000017f2b966269_1 ... (0s) Current status: DONE
data
1
But if I run the following query
# example2.sql
DECLARE useless_variable_1 STRING DEFAULT 'var_1';
DECLARE useless_variable_2 STRING DEFAULT 'var_2';
SELECT 1 AS data;
like this: bq query --use_legacy_sql=false --format=csv < example2.sql, I get the following output:
Waiting on bqjob_r501cc26a3d336b7c_0000017f2b97ceb1_1 ... (0s) Current status: DONE
SELECT 1 AS data; -- at [6:1]
data
1
As you can see, adding variables caused bq query to print the contents of "example2.sql" to stdout, which is not what I want. How can I avoid printing the contents of the file to stdout and instead just return the result of the query like in the first example?

Snowflake - getting 'Error parsing JSON' while using the COPY command from S3 to Snowflake

I'm trying to copy .gz files from my S3 directory to Snowflake.
I created a table in Snowflake (notice that the 'extra' field is defined as 'Variant'):
CREATE TABLE accesslog
(
loghash VARCHAR(32) NOT NULL,
logdatetime TIMESTAMP,
ip VARCHAR(15),
country VARCHAR(2),
querystring VARCHAR(2000),
version VARCHAR(15),
partner INTEGER,
name VARCHAR(100),
countervalue DOUBLE PRECISION,
username VARCHAR(50),
gamesessionid VARCHAR(36),
gameid INTEGER,
ingameid INTEGER,
machineuid VARCHAR(36),
extra variant,
ingame_window_name VARCHAR(2000),
extension_id VARCHAR(50)
);
I used this COPY command in Snowflake:
copy INTO accesslog
FROM s3://XXX
pattern='.*cds_201911.*'
CREDENTIALS = (
aws_key_id='XXX',
aws_secret_key='XXX')
FILE_FORMAT=(
error_on_column_count_mismatch=false
FIELD_OPTIONALLY_ENCLOSED_BY = '"'
TYPE = CSV
COMPRESSION = GZIP
FIELD_DELIMITER = '\t'
)
ON_ERROR = CONTINUE
I ran it and got this result (I got many error lines; this is an example of one):
(screenshot: snowflake result)
a17589e44ae66ffb0a12360beab5ac12 2019-11-01 00:08:39 155.4.208.0 SE 0.136.0 3337 game_process_detected 0 OW_287d4ea0-4892-4814-b2a8-3a5703ae68f3 e9464ba4c9374275991f15e5ed7add13 765 19f030d4-f85f-4b85-9f12-6db9360d7fcc [{"Name":"file","Value":"wowvoiceproxy.exe"},{"Name":"folder","Value":"C:\\Program Files (x86)\\World of Warcraft\\_retail_\\Utils\\WowVoiceProxy.exe"}]
Can you please tell me what causes this error?
Thanks!
I'm guessing:
The 'Error parsing JSON' is certainly related to the extra variant field.
The JSON looks fine, but there are potential problems with the backslashes \.
If you look at the successfully loaded lines, have the backslashes been removed?
This can (maybe) happen if you have STAGE settings involving escape characters.
The \\Utils substring in the Windows path value can then trigger a Unicode decode error, e.g.:
Error parsing JSON: hex digit is expected in \U???????? escape sequence, pos 123
UPDATE:
It turns out you have to turn off escape char processing by adding the following to the FILE_FORMAT:
ESCAPE_UNENCLOSED_FIELD = NONE
The alternative is to double-quote the fields or to double-escape the backslashes, e.g. C:\\\\Program Files.
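For reference, the full COPY command with that option added would look something like this (same placeholders as above):
copy INTO accesslog
FROM s3://XXX
pattern='.*cds_201911.*'
CREDENTIALS = (
  aws_key_id='XXX',
  aws_secret_key='XXX')
FILE_FORMAT=(
  error_on_column_count_mismatch=false
  FIELD_OPTIONALLY_ENCLOSED_BY = '"'
  TYPE = CSV
  COMPRESSION = GZIP
  FIELD_DELIMITER = '\t'
  ESCAPE_UNENCLOSED_FIELD = NONE
)
ON_ERROR = CONTINUE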

Using AWS Athena to query one line from a CSV file in S3 and export the list

I need to select only one line, the last line, from many multi-line CSV files, add them to a table in AWS Athena, and then export them as a whole list to a CSV.
I am trying to collect data from many sources; the CSV files are updated weekly, but I only need one line from each file. I have used the standard import to Athena and it imports all lines from the selected CSVs in the bucket, but I need only the last line of each, so that I have the most recent data from that file.
CREATE EXTERNAL TABLE IF NOT EXISTS inventory.laptops (
`date` string,
`serialnum` string,
`biosver` string,
`machine` string,
`manufacturer` string,
`model` string,
`win` string,
`winver` string,
`driveletter` string,
`size` string,
`macaddr` string,
`domain` string,
`ram` string,
`processor` string,
`users` string,
`fullname` string,
`location` string,
`lastconnected` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
'serialization.format' = ',',
'quoteChar' = '"',
'field.delim' = ','
) LOCATION 's3://my-s3-bucket/'
TBLPROPERTIES ('has_encrypted_data'='false',"skip.header.line.count"="1");
I need the last line from each CSV file in S3, but I get every line using this creation query.
Yes, CREATE TABLE defines how to read the file. You will need to craft a SELECT statement to retrieve the desired line. You will need to use some identifier in the file that can indicate the last line, such as having the latest date.
For example, if the last line always has the most recent date, you could use:
SELECT *
FROM inventory.laptops
ORDER BY date DESC
LIMIT 1
If there is no field that can be used to identify the last line, you might need to cheat by finding out the number of lines in the file, then skipping over all but the last line using skip.header.line.count.
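For example, if every file happened to contain exactly 120 lines (header included), a rough sketch of that trick against the table above would be to treat everything except the last line as header rows; note that the property applies to each file, so it only works if all the files are the same length:
ALTER TABLE inventory.laptops SET TBLPROPERTIES ('skip.header.line.count' = '119');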
Normally, the order of rows in a file is unimportant.
So this is not possible directly in Athena, but you can create a Lambda function to concatenate the last line of each CSV file under a bucket prefix into a single CSV, write that out, and then import it into Athena for querying. I used Python to solve this.
import logging
import boto3, os
import json

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client('s3')

def lambda_handler(event, context):
    data = ''
    # retrieve the bucket name from the environment
    bucket_name = os.environ['s3_bucket']
    # list the objects under the prefix
    obj_list = s3.list_objects_v2(Bucket=bucket_name, Prefix='bucket prefix')
    x = 0
    for item in obj_list['Contents']:
        obj = s3.get_object(Bucket=bucket_name, Key=item['Key'])
        # get the lines inside the csv
        lines = obj['Body'].read().split(b'\n')
        # count the number of lines in the file
        f = 0
        for r in lines:
            f += 1
        b = 0
        for r in lines:
            # keep the header row from the first file only
            if x < 1:
                x += 1
                if b == 0:
                    header = r.decode()
                    data += header + '\n'
            b += 1
            # the second-to-last element is the last data line,
            # assuming each file ends with a trailing newline
            if b == f - 1:
                data += r.decode() + '\n'
    s3.put_object(Bucket=bucket_name, Key='Concat.csv', Body=data)
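To finish the workflow you can then point an Athena table at a prefix that contains only the concatenated file. Here is a sketch reusing the DDL from the question; the laptops_latest table name and the concat/ prefix are made up, and it assumes the Lambda is changed to write Concat.csv under that prefix:
CREATE EXTERNAL TABLE IF NOT EXISTS inventory.laptops_latest (
  `date` string,
  `serialnum` string,
  `biosver` string,
  `machine` string,
  `manufacturer` string,
  `model` string,
  `win` string,
  `winver` string,
  `driveletter` string,
  `size` string,
  `macaddr` string,
  `domain` string,
  `ram` string,
  `processor` string,
  `users` string,
  `fullname` string,
  `location` string,
  `lastconnected` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'serialization.format' = ',',
  'quoteChar' = '"',
  'field.delim' = ','
) LOCATION 's3://my-s3-bucket/concat/'
TBLPROPERTIES ('has_encrypted_data'='false', "skip.header.line.count"="1");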

BadDataError when editing a .dbf file using dbf package

I have recently produced several thousand shapefile outputs and accompanying .dbf files from an atmospheric model (HYSPLIT) on a unix system. The converter txt2dbf is used to convert shapefile attribute tables (text file) to a .dbf.
Unfortunately, something has gone wrong (probably a separator/field length error) because there are 2 problems with the output .dbf files, as follows:
Some fields of the dbf contain data that should not be there. This data has "spilled over" from neighbouring fields.
An additional field has been added that should not be there (it actually comes from a section of the first record of the text file, "1000 201").
This is an example of the first record in the output dbf (retrieved using the dbview unix package):
Trajnum : 1001 2
Yyyymmdd : 0111231 2
Time : 300
Level : 0.
1000 201:
Here's what I expected:
Trajnum : 1000
Yyyymmdd : 20111231
Time : 2300
Level : 0.
Separately, I'm looking at how to prevent this from happening again, but ideally I'd like to be able to repair the existing .dbf files. Unfortunately the text files are removed for each model run, so "fixing" the .dbf files is the only option.
My approaches to the above problems are:
Extract the information from the fields that do exist to a new variable using dbf.add_fields and dbf.write (python package dbf), then delete the old incorrect fields using dbf.delete_fields.
Delete the unwanted additional field.
This is what I've tried:
with dbf.Table(db) as db:
    db.add_fields("TRAJNUMc C(4)")  # create new fields
    db.add_fields("YYYYMMDDc C(8)")
    db.add_fields("TIMEc C(4)")
    for record in db:  # extract data from fields
        dbf.write(TRAJNUMc=int(str(record.Trajnum)[:4]))
        dbf.write(YYYYMMDDc=int(str(record.Trajnum)[-1:] + str(record.Yyyymmdd)[:7]))
        dbf.write(TIMEc=record.Yyyymmdd[-1:] + record.Time[:])
    db.delete_fields('Trajnum')  # delete the incorrect fields
    db.delete_fields('Yyyymmdd')
    db.delete_fields('Time')
    db.delete_fields('1000 201')  # delete the unwanted field
    db.pack()
But this produces the following error:
dbf.ver_2.BadDataError: record data is not the correct length (should be 31, not 30)
Given the apparent problem with the txt2dbf conversion, I'm not surprised to find an error in the record data length. However, does this mean that the file is completely corrupted and that I can't extract the information that I need (frustrating, because I can see that it exists)?
EDIT:
Rather than attempting to edit the 'bad' .dbf files, it seems a better approach is to 1. extract the required data from the bad files to a text file and then 2. write it to a new dbf. (See Ethan Furman's comments/answer below.)
EDIT:
An example of a faulty .dbf file that I need to fix/recover data from can be found here:
https://www.dropbox.com/s/9y92f7m88a8g5y4/p0001120110.dbf?dl=0
An example .txt file from which the faulty dbf files were created can be found here:
https://www.dropbox.com/s/d0f2c0zehsyy8ab/attTEST.txt?dl=0
To fix the data and recreate the original text file, this snippet should help:
import dbf

table = dbf.Table('/path/to/scramble/table.dbf')
with table:
    fixed_data = []
    for record in table:
        # convert to str/bytes while skipping delete flag
        data = record._data[1:].tostring()
        trajnum = data[:4]
        ymd = data[4:12]
        time = data[12:16]
        level = data[16:].strip()
        fixed_data.append((trajnum, ymd, time, level))

new_file = open('repaired_data.txt', 'w')
for line in fixed_data:
    new_file.write(','.join(line) + '\n')
Assuming all your data files look like your sample (the big IF being the data has no embedded commas), then this rough code should help translate your text files into dbfs:
raw_data = open('some_text_file.txt').read().split('\n')
final_table = dbf.Table(
    'dest_table.dbf',
    'trajnum C(4); yyyymmdd C(8); time C(4); level C(9)',
    )
with final_table:
    for line in raw_data:
        fields = line.split(',')
        final_table.append(tuple(fields))
# table has been populated and closed
Of course, you could get fancier and use actual date and number fields if you want to:
# the dbf field string becomes
'trajnum N; yyyymmdd D; time C(4); level N'
# and the appending-data loop becomes
for line in raw_data:
    trajnum, ymd, time, level = line.split(',')
    trajnum = int(trajnum)
    ymd = dbf.Date(int(ymd[:4]), int(ymd[4:6]), int(ymd[6:]))
    level = int(level)
    final_table.append((trajnum, ymd, time, level))

How to get this output from Pig Latin in MapReduce

I want to get the following output from Pig Latin / Hadoop
((39,50,60,42,15,Bachelor,Male),5)
((40,35,HS-grad,Male),2)
((39,45,15,30,12,7,HS-grad,Female),6)
from the following data sample
(screenshot: data sample for adult data)
I have written the following Pig Latin script:
sensitive = LOAD '/mdsba/sample2.csv' using PigStorage(',') as (AGE,EDU,SEX,SALARY);
BV= group sensitive by (EDU,SEX) ;
BVA= foreach BV generate group as EDU, COUNT (sensitive) as dd:long;
Dump BVA ;
Unfortunately, the results come out like this:
((Bachelor,Male),5)
((HS-grad,Male),2)
Then try to project the AGE data too.
Something like this:
BVA = foreach BV generate
      sensitive.AGE as AGE,
      FLATTEN(group) as (EDU,SEX),
      COUNT(sensitive) as dd:long;
Another suggestion is to specify the data types when you load the data:
sensitive = LOAD '/mdsba/sample2.csv' using PigStorage(',') as (AGE:int,EDU:chararray,SEX:chararray,SALARY:chararray);