Why does the crawler create duplicate column names with #? - amazon-web-services

When I crawl data from my data lake in an S3 bucket, some duplicate columns are created with a # in the name (e.g. price#22) in my Data Catalog.

Related

Athena insert data into newly added column

I am trying to insert data into a new column I added. Athena does not have an UPDATE table command. Is there any way to do this without reloading the whole table?
I created a test table and then added the column doing this:
ALTER TABLE MikeTest ADD COLUMNS (monthNum int);
I want to update the column with this SQL statement:
month(date_parse("date", '%m/%d/%Y'))
Amazon Athena reads its data from Amazon S3. It is not possible to 'update' a table because this would require re-writing the files in S3.
You could create a new table with the additional column:
CREATE TABLE new_table
WITH (
    external_location = 's3://my_athena_results/folder/',
    format = 'Parquet',
    write_compression = 'SNAPPY'
)
AS
SELECT
    *,
    month(date_parse("date", '%m/%d/%Y')) AS month
FROM old_table
This will copy the data to a new location in S3, while populating the new column.
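As a quick check on the result (a minimal sketch; new_table and the month column come from the CTAS above, "date" from the question's data):
SELECT "date", month
FROM new_table
LIMIT 10;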

How to fetch all the values of one column using column name in dynamoDB table in java?

I have a DynamoDB table in my AWS account. I can create a client like this:
AmazonDynamoDB amazonDynamoDB = AmazonDynamoDBClient.builder()
        .withRegion("eu-west-1")
        .withCredentials(creds)
        .build();
DynamoDB dynamoDB = new DynamoDB(amazonDynamoDB);
Table table = dynamoDB.getTable("table name");
Suppose there is a column named "content". I want to get a list or set of all values in the "content" column.
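DynamoDB has no column-level read, so the usual approach is a full table Scan with a ProjectionExpression that fetches only that attribute. A minimal sketch using the SDK v1 document API from the question (region, table name, and attribute name taken from the question; note that a Scan reads every item and consumes read capacity accordingly):
import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.ItemCollection;
import com.amazonaws.services.dynamodbv2.document.ScanOutcome;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.ScanSpec;

public class FetchColumnValues {
    public static void main(String[] args) {
        AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
                .withRegion("eu-west-1")
                .build();
        DynamoDB dynamoDB = new DynamoDB(client);
        Table table = dynamoDB.getTable("table name"); // table name from the question

        // Scan the whole table, but project only the "content" attribute
        ScanSpec spec = new ScanSpec().withProjectionExpression("content");
        ItemCollection<ScanOutcome> items = table.scan(spec);

        // ItemCollection iterates through all result pages automatically
        List<String> values = new ArrayList<>();
        for (Item item : items) {
            values.add(item.getString("content"));
        }
        System.out.println(values);
    }
}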

AWS Glue Dynamic_frame with pushdown predicate not filtering correctly

I am writing a script for AWS Glue that reads Parquet files stored in S3, in which I am creating a DynamicFrame and attempting to use push-down predicate logic to restrict the data coming in.
The table partitions are (in order): account_id > region > vpc_id > dt
And the code for creating the dynamic_frame is the following:
dynamic_frame = glueContext.create_dynamic_frame.from_catalog(
    database=DATABASE_NAME,
    table_name=TABLE_NAME,
    push_down_predicate="dt='" + DATE + "'")
where DATE = '2019-10-29'
However, it seems that Glue still attempts to read data from other days. Maybe it's because I have to specify a push_down_predicate for the other criteria?
As per the comments, the logs show that the date partition column is named "dt", whereas in your table it is referred to by the name "date".
Logs
s3://route/account_id=XXX/region=eu-west-1/vpc_id=YYY/dt=2019-07-15
s3://route/account_id=XXX/region=eu-west-1/vpc_id=YYY//dt=2019-10-03
s3://route/account_id=XXX/region=eu-west-1/vpc_id=YYY//dt=2019-08-27
s3://route/account_id=XXX/region=eu-west-1/vpc_id=YYY//dt=2019-10-29 ...
Your Code
dynamic_frame = glueContext.create_dynamic_frame.from_catalog(
    database=DATABASE_NAME,
    table_name=TABLE_NAME,
    push_down_predicate="date='" + DATE + "'")
Change the date partition column name to dt in your table, and use the same name in the push_down_predicate parameter in the code above.
I also see extra forward slashes in some of the paths in the above logs. Were these partitions added manually through Athena using the ALTER TABLE command? If so, I would recommend using the MSCK REPAIR TABLE command to load all partitions in the table and avoid such issues. Extra blank slashes in S3 paths sometimes lead to errors while doing ETL through Spark.
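For example, running this in Athena (table name hypothetical) rediscovers all Hive-style partitions under the table's S3 location:
MSCK REPAIR TABLE my_table;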

How to fetch string from a JSON array stored in Amazon Redshift column?

I have a database stored in Amazon Redshift, and an array is stored in a table column in JSON format.
How can I fetch a string from the array?
Using json_extract_path_text you can retrieve values from a column.
In my Redshift database I have JSON in one column. This query performs a join and returns separate column results:
SELECT json_extract_path_text(O._doc, 'domain') AS Domain,
       json_extract_path_text(P._doc, 'email') AS Email
FROM intelligense_mongo.organisations AS O
INNER JOIN intelligense_mongo.people AS P
    ON json_extract_path_text(O._doc, '_id') =
       json_extract_path_text(P._doc, 'organisation_id')
-- LIKE (not =) so the % wildcards in the pattern are honoured
WHERE json_extract_path_text(O._doc, 'tools_name') LIKE '%"WordPress"%'
Use the JSON_EXTRACT_PATH_TEXT Function:
select json_extract_path_text('{"f2":{"f3":1},"f4":{"f5":99,"f6":"star"}}','f4', 'f6');
json_extract_path_text
----------------------
star

Amazon Athena : How to store results after querying with skipping column headers?

I ran a simple query using the Athena dashboard on data in CSV format. The result was a CSV with column headers.
When storing the results, Athena includes the column headers in S3. How can I skip storing the header column names, as I have to make a new table from the results and it is repetitive?
Try "skip.header.line.count"="1", This feature has been available on AWS Athena since 2018-01-19, here's a sample:
CREATE EXTERNAL TABLE IF NOT EXISTS tableName (
    `field1` string,
    `field2` string,
    `field3` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    'separatorChar' = ',',
    'quoteChar' = '\"',
    'escapeChar' = '\\'
)
LOCATION 's3://fileLocation/'
TBLPROPERTIES ('skip.header.line.count'='1')
You can refer to this question:
Aws Athena - Create external table skipping first row
From an Eric Hammond post on AWS Forums:
...
WHERE
date NOT LIKE '#%'
...
I found this works! The steps I took:
Ran an Athena query, with the output going to Amazon S3
Created a new table pointing to this output, based on How do I use the results of my Amazon Athena query in another query?, changing the path to the correct S3 location
Ran a query on the new table with the above WHERE <datefield> NOT LIKE '#%' (sketched below)
However, subsequent queries store even more result files in that S3 directory, which confuses later executions.
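Putting steps 2 and 3 together, a minimal sketch (table name, column names, and the S3 path are hypothetical placeholders for your own query's results location):
CREATE EXTERNAL TABLE query_results (
    `date` string,
    `value` string
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://my_athena_results/folder/';

SELECT *
FROM query_results
WHERE "date" NOT LIKE '#%';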