I want to use Regex to find an S3 directory path in AWS Data Pipeline.
This is for an S3 Data Node. And then I will do a Redshift Copy from S3 to a Redshift table.
Example S3 path: s3://foldername/hh=10
Can we use regex to find hh=##, where ## could be any number from 0-24?
The goal is to copy all the files in folders whose name is hh=1, hh=2, hh=3, etc. (hh is the hour).
Here's a bit of regex that will capture the last 1 or 2 digits after 'hh=', at the end of the line.
/hh=(\d{1,2})$/
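That pattern accepts any one- or two-digit value; if you want to restrict it to real hours (0-23), a stricter alternation works. A quick Python check, using the example path plus a made-up non-hour value:

import re

# Matches 'hh=' followed by 0-23 (optionally zero-padded) at the end of the path.
# Use /hh=(\d{1,2})$/ instead if any one- or two-digit number is acceptable.
HOUR_PATTERN = re.compile(r"hh=(2[0-3]|1[0-9]|0?[0-9])$")

for path in ["s3://foldername/hh=10", "s3://foldername/hh=3", "s3://foldername/hh=99"]:
    match = HOUR_PATTERN.search(path)
    print(path, "->", match.group(1) if match else "no match")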
I have multiple S3 files in a bucket.
Input S3 bucket:
File 1 - 2GB data
File 2 - 500MB data
File 3 - 1GB data
File 4 - 2GB data
and so on. Assume there are 50 such files. The data within the files has the same schema, let's say attribute1, attribute2.
I want to merge these files and output them into a new bucket as follows, such that each file is less than 1GB and keeps the same schema as before.
File 1 - < 1GB
File 2 - < 1GB
File 3 - < 1GB
I am looking for AWS-based solutions that I can deliver using AWS CDK. I was considering the following two solutions:
AWS Athena - reads from and writes to S3, but I'm not sure if I can set a 1GB limit while writing.
AWS Lambda - read the files sequentially, store them in memory, and when the size nears 1GB, write to a new file in the S3 bucket. Repeat until all files are processed. I'm worried about the 15-minute timeout; I'm not sure Lambda will be able to finish.
Expected scale -> overall input size: 1 TB
What would be a good way to go about implementing this? I hope I have phrased the question right; I'd be happy to clarify in the comments.
Thanks!
Edit:
Based on a comment ->
Apologies for calling it a merge; it's more of a re-split. All files have the same schema and are plain CSV files. In pseudocode:
List<Files> listOfFiles = ReadFromS3(key)
Create a new file named temp.csv
for each file : listOfFiles :
    append file to temp.csv
List<1GBFiles> finalList = break temp.csv down into sets of 1GB each
for (File file : finalList)
    writeToS3(file)
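For concreteness, here is a rough Python rendering of that pseudocode (bucket and prefix names are hypothetical; it streams the source CSVs line by line and re-cuts the concatenation into output objects just under 1GB):

import boto3

s3 = boto3.client("s3")
ONE_GB = 1024 ** 3

def resplit(src_bucket, src_prefix, dst_bucket, dst_prefix):
    # Hypothetical bucket/prefix names. Concatenates every source CSV and cuts
    # the result into output objects that stay just under 1 GB each.
    paginator = s3.get_paginator("list_objects_v2")
    part, buf, size = 1, [], 0
    for page in paginator.paginate(Bucket=src_bucket, Prefix=src_prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=src_bucket, Key=obj["Key"])["Body"]
            for line in body.iter_lines():
                line += b"\n"
                if size + len(line) > ONE_GB and buf:
                    s3.put_object(Bucket=dst_bucket,
                                  Key=f"{dst_prefix}part-{part}.csv",
                                  Body=b"".join(buf))
                    part, buf, size = part + 1, [], 0
                buf.append(line)
                size += len(line)
    if buf:
        s3.put_object(Bucket=dst_bucket,
                      Key=f"{dst_prefix}part-{part}.csv",
                      Body=b"".join(buf))

Whether a single Lambda invocation can stream ~1 TB inside the 15-minute limit is a separate concern, which is why the answer below suggests Athena.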
Amazon Athena can run a query across multiple objects in a given Amazon S3 path, as long as they all have the same format (e.g. the same columns in a CSV file).
It can store the result in a new External Table, with a location pointing to an S3 bucket, by using a CREATE TABLE AS command and a LOCATION parameter.
The size of the output files can be controlled by setting the number of output buckets (which is not the same as an S3 bucket).
See:
Bucketing vs partitioning - Amazon Athena
Set the number or size of files for a CTAS query in Amazon Athena
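As a rough sketch of what that can look like (database, table, column, and bucket names below are placeholders, and bucket_count is a guess of roughly total input size divided by 1 GB), a CTAS query issued through boto3:

import boto3

athena = boto3.client("athena")

# Hypothetical names: source_db.input_table would be the table over the input bucket.
# Bucketing distributes rows by hash of attribute1, so output sizes are approximate.
ctas_query = """
CREATE TABLE source_db.merged_output
WITH (
    format = 'TEXTFILE',
    field_delimiter = ',',
    external_location = 's3://my-output-bucket/merged/',
    bucketed_by = ARRAY['attribute1'],
    bucket_count = 1000
) AS
SELECT attribute1, attribute2
FROM source_db.input_table
"""

athena.start_query_execution(
    QueryString=ctas_query,
    QueryExecutionContext={"Database": "source_db"},
    ResultConfiguration={"OutputLocation": "s3://my-output-bucket/athena-results/"},
)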
If your process includes a post-processing ETL (Extract, Transform, Load) step, you could use AWS Glue.
Please find here an example of Glue using S3 as a source.
If you'd like to use it with the Java SDK, the best starting points are:
the Glue GitHub repo
the AWS Java code sample catalog for Glue
Out of all of them, the tutorial to create a crawler (which you can find on GitHub via the above URL) should match your case best, as it crawls an S3 bucket and puts it into a Glue catalog for transformation.
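If you end up scripting the crawler rather than clicking it together in the console, a minimal boto3 sketch (crawler name, role ARN, database, and path below are placeholders) would be:

import boto3

glue = boto3.client("glue")

# All names below are placeholders; the IAM role must allow Glue to read the bucket.
glue.create_crawler(
    Name="csv-input-crawler",
    Role="arn:aws:iam::123456789012:role/MyGlueServiceRole",
    DatabaseName="source_db",
    Targets={"S3Targets": [{"Path": "s3://my-input-bucket/data/"}]},
)

glue.start_crawler(Name="csv-input-crawler")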
I'm trying to create a glue crawler to crawl a specific path pattern. I have the following paths:
bucket/inference/2022/04/28/modelling/metadata.tar.gz
bucket/inference/2022/04/28/prediction/predictions.parquet
bucket/inference/2022/04/28/extract/data.parquet
The same pattern is repeated every day, i.e. we have the above for
bucket/inference/2022/04/29/*
bucket/inference/2022/04/30/*
I only want to crawl what's in the prediction/ folders each day. I've set up a Glue crawler pointing at bucket/inference/, with the following exclude patterns:
**/modelling/**
**/extract/**
The logs correctly show that the bucket/inference/2022/04/28/modelling/metadata.tar.gz and bucket/inference/2022/04/28/extract/data.parquet files are being excluded, and the DDL metadata shows that it's picking up the correct number of objects and rows in the data.
However, when I go to SELECT * in Athena, I get the following error:
HIVE_BAD_DATA: Not valid Parquet file: s3://bucket/inference/2022/04/28/modelling/metadata.tar.gz expected magic number: PAR1
I've tried every combo of the above exclude patterns, but it always seems to be picking up what's in the modelling folder, despite the logs explicitly excluding it. Am I missing something here?
Many thanks.
This is a known issue with Athena. From the AWS troubleshooting documentation:
Athena does not recognize exclude patterns that you specify for an AWS Glue crawler. For example, if you have an Amazon S3 bucket that contains both .csv and .json files and you exclude the .json files from the crawler, Athena queries both groups of files. To avoid this, place the files that you want to exclude in a different location.
Reference: Athena reads files that I excluded from the AWS Glue crawler (AWS)
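One way to follow that advice, given the layout in the question, is to copy just the prediction objects to their own prefix and point the crawler there. A small boto3 sketch (the predictions-only/ prefix is made up; the bucket and inference/ prefix are the ones from the question):

import boto3

s3 = boto3.client("s3")
bucket = "bucket"

# Copy only the */prediction/* objects to a dedicated prefix the crawler can own outright.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="inference/"):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if "/prediction/" in key:
            new_key = key.replace("inference/", "predictions-only/", 1)
            s3.copy_object(
                Bucket=bucket,
                CopySource={"Bucket": bucket, "Key": key},
                Key=new_key,
            )

The crawler's include path can then point at s3://bucket/predictions-only/ with nothing to exclude.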
I am reading data from an S3 bucket using Athena, and the data from the following file is correct.
# aws s3 ls --human s3://some_bucket/email_backup/email1/
2020-08-17 07:00:12 0 Bytes
2020-08-17 07:01:29 5.0 GiB email_logs_old1.csv.gz
When I change the path to _updated as shown below, I get an error.
# aws s3 ls --human s3://some_bucket/email_backup_updated/email1/
2020-08-22 12:01:36 5.0 GiB email_logs_old1.csv.gz
2020-08-22 11:41:18 5.0 GiB
This is because of the extra file without a name in the same location. I have no idea how I managed to upload a file without a name. I would like to know how to reproduce it (so that I can avoid it).
All S3 files have a name (in fact, the full path is the object key, which is the metadata that defines your object's name).
If you see a blank-named file at the path s3://some_bucket/email_backup_updated/email1/, you have likely created an object whose key is s3://some_bucket/email_backup_updated/email1/.
As mentioned earlier, S3 objects are identified by keys, so a real file hierarchy does not exist; you are simply filtering by prefix.
You should be able to validate this by running the same command without the trailing slash: aws s3 ls --human s3://some_bucket/email_backup_updated/email1
If you add an extra non-breaking space at the end of the destination path, the file will be copied to S3 but with a blank name. For example:
aws s3 cp t.txt s3://some_bucket_123/email_backup_updated/email1/
(Note the non-breaking space after email1/ )
\xa0 is actually a non-breaking space in Latin-1, also chr(160). The non-breaking space itself is the name of the file!
Using the same logic, I can remove the "space" file by adding the non-breaking space at the end.
aws s3 rm s3://some_bucket_123/email_backup_updated/email1/
I can also login to console and remove it from User Interface.
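If typing the non-breaking space in a terminal is fiddly, the same cleanup can be done from boto3, where \xa0 can be spelled out explicitly (bucket and prefix as in the example above):

import boto3

s3 = boto3.client("s3")

# The object's key literally ends in a non-breaking space (chr(160) == "\xa0").
s3.delete_object(
    Bucket="some_bucket_123",
    Key="email_backup_updated/email1/\xa0",
)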
I have about 2M+ records across ~600 CSV files in a single bucket, all at the root level, not in any subfolders. The files all start with a unique ID number of 3-6 digits. If I run the following command:
LOAD DATA FROM S3 PREFIX 's3://my-bucket/'
IGNORE INTO TABLE `my_table`
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES;
Only about 500k records are loaded into the table. But if I run a sequence of commands with prefixes 1 through 9, then eventually I get the expected row count loaded into the table.
LOAD DATA FROM S3 PREFIX 's3://my-bucket/1'
...
LOAD DATA FROM S3 PREFIX 's3://my-bucket/2'
...
LOAD DATA FROM S3 PREFIX 's3://my-bucket/3'
...
...
LOAD DATA FROM S3 PREFIX 's3://my-bucket/9'
According to the docs, it does not appear you can use a wildcard * in the prefix string. I'm at a loss as to why this isn't behaving as expected.
Update: figured out the issue. The files were being overwritten/replaced as part of an update process. If a file/object was in the middle of being written, the LOAD from S3 would stop on that file. The solution was to prefix the updates with a timestamp instead of writing on top of the same file names over and over.
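A sketch of that fix on the upload side (bucket and helper names are made up): each update batch goes under a fresh timestamped prefix, so a LOAD in progress never reads a half-written object:

import time
import boto3

s3 = boto3.client("s3")

def upload_batch(local_files: list[str], bucket: str = "my-bucket") -> str:
    # Each batch lands under its own prefix, e.g. "20240501T120000/123456.csv".
    prefix = time.strftime("%Y%m%dT%H%M%S")
    for path in local_files:
        s3.upload_file(path, bucket, f"{prefix}/{path}")
    return prefix  # then: LOAD DATA FROM S3 PREFIX 's3://my-bucket/<prefix>/' ...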
Part One:
I ran a Glue crawler on a dummy CSV loaded in S3. It created a table, but when I try to view the table in Athena and query it, it shows "Zero records returned".
The ELB demo data in Athena works fine, though.
Part Two (scenario):
Suppose I have an Excel file and a data dictionary describing how and in what format the data is stored in that file, and I want that data dumped into AWS Redshift. What would be the best way to achieve this?
I experienced the same issue. You need to give the crawler the folder path instead of the actual file name, and then run it. I tried feeding the folder name to the crawler and it worked. Hope this helps. Let me know. Thanks.
I experienced the same issue. Try creating a separate folder for each table in the S3 bucket, then rerun the Glue crawler. You will get a new table in the Glue Data Catalog with the same name as the S3 folder.
Delete the crawler and create it again (no more than one CSV file should be in the folder when you run the crawler).
Important note:
with one CSV file per folder, run the crawler and the records can be viewed in Athena.
I was indeed providing the S3 folder path instead of the filename and still couldn't get Athena to return any records ("Zero records returned", "Data scanned: 0KB").
Turns out the problem was that the input files (my rotated log files automatically uploaded to S3 from Elastic Beanstalk) start with underscore (_), e.g. _var_log_nginx_rotated_access.log1534237261.gz! Apparently that's not allowed.
The structure of the S3 bucket/folder is very important:
s3://<bucketname>/<data-folder>/
/<type-1-[CSVs|Parquets etc]>/<files.[csv or parquet]>
/<type-2-[CSVs|Parquets etc]>/<files.[csv or parquet]>
...
/<type-N-[CSVs|Parquets etc]>/<files.[csv or parquet]>
and specify in the "include path" of the Glue Crawler:
s3://<bucketname e.g my-s3-bucket-ewhbfhvf>/<data-folder e.g data>
Solution: select the path of the folder even if the folder contains many files. This will generate one table and the data will be displayed.
In many such cases, using an exclude pattern in the Glue crawler has helped me.
It is certainly true that instead of pointing the crawler directly at the file, we should point it at the directory; and when even that returns no records, an exclude pattern comes to the rescue.
You will have to devise a pattern by which only the files you want get crawled and the rest are excluded. (I suggest this instead of creating a different directory for each file, since most of the time making such changes in a production bucket is not feasible.)
I had data in an S3 bucket. There were multiple directories, and inside each directory there were a snappy Parquet file and a JSON file. The JSON file was causing the issue.
So I ran the crawler on the master directory containing the many subdirectories, and as the exclude pattern I gave */*.json
This time it did not create any table for the JSON files, and I was able to see the records of the table using Athena.
For reference: https://docs.aws.amazon.com/glue/latest/dg/define-crawler.html
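For completeness, that exclude pattern can also be set programmatically; a boto3 sketch with a placeholder crawler name and path:

import boto3

glue = boto3.client("glue")

# Placeholder crawler name and path; Exclusions takes the same glob patterns as the console.
glue.update_crawler(
    Name="master-directory-crawler",
    Targets={
        "S3Targets": [
            {"Path": "s3://my-bucket/master-directory/", "Exclusions": ["*/*.json"]}
        ]
    },
)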
Pointing the Glue crawler at the S3 folder and not the actual file did the trick.
Here's what worked for me: I needed to move all of my CSVs into their own folders; just pointing the Glue crawler at the parent folder ('csv/' for me) was not enough.
csv/allergies.csv -> fails
csv/allergies/allergies.csv -> succeeds
Then I just pointed the AWS Glue crawler at csv/ and everything was parsed out well.
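In case it saves someone the manual shuffling, here is a small boto3 sketch of that reorganisation (bucket name is hypothetical); it copies each csv/<name>.csv to csv/<name>/<name>.csv and removes the original:

import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"  # hypothetical

resp = s3.list_objects_v2(Bucket=bucket, Prefix="csv/", Delimiter="/")
for obj in resp.get("Contents", []):
    key = obj["Key"]                      # e.g. "csv/allergies.csv"
    if not key.endswith(".csv"):
        continue
    name = key.rsplit("/", 1)[-1][:-4]    # strip folder prefix and ".csv" -> "allergies"
    new_key = f"csv/{name}/{name}.csv"    # "csv/allergies/allergies.csv"
    s3.copy_object(Bucket=bucket, CopySource={"Bucket": bucket, "Key": key}, Key=new_key)
    s3.delete_object(Bucket=bucket, Key=key)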