I have the following workflow.
I need to identify duplicate files on S3 in order to avoid duplicates in my destination (Redshift).
Load files to S3 every 4 hours from an FTP server (file storage structure: year/month/date/hour/minute/filename).
Load S3 to Redshift once all of the files are pulled (for that interval).
This is a continuous job that runs every 4 hours.
Problem:
Sometimes files with the same content but different file names are present on S3. These files can belong to different intervals or different days. For example, if a file named one.csv arrives on 1st Oct 2018 and contains 1,2,3,4 as its content, then it is possible that on 10th Oct 2018 a file may arrive with the same content 1,2,3,4 but with a different file name.
I want to avoid loading such a file to S3 if its contents are the same as a file that is already present.
I know that I can use a file hash to identify two identical files, but my problem is how to achieve this on S3, and with so many files.
What would be the best approach to proceed?
Basically, I want to avoid loading data to S3 that is already present.
You can add another table in Redshift (or anywhere else, such as MySQL or DynamoDB) that contains the ETag/MD5 hash of each file uploaded.
You probably already have a script that runs every 4 hours and loads data into Redshift. In this same script, after data is loaded successfully into Redshift, make an entry in this table. Also, in the same script, add a check against this new table before loading data into Redshift.
You need to make sure that you seed this new table with the ETags of all files you have already loaded into Redshift.
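A minimal sketch of that check in Python, assuming a DynamoDB tracking table (named loaded_file_hashes here, a placeholder) keyed on the object's ETag; the bucket and prefix are placeholders too:

import boto3

s3 = boto3.client("s3")
hash_table = boto3.resource("dynamodb").Table("loaded_file_hashes")  # placeholder tracking table, keyed on "etag"

def new_files_for_interval(bucket, prefix):
    """Return keys under the interval prefix whose ETag has not been recorded yet."""
    new_keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            etag = obj["ETag"].strip('"')
            if "Item" not in hash_table.get_item(Key={"etag": etag}):
                new_keys.append(obj["Key"])
    return new_keys

def record_loaded(bucket, keys):
    """Call after a successful Redshift COPY so duplicates are skipped on later runs."""
    for key in keys:
        etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
        hash_table.put_item(Item={"etag": etag, "key": key})

One caveat: the S3 ETag equals the MD5 of the content only for non-multipart uploads; for files uploaded in multiple parts you would have to compute and store your own hash.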
I'm creating an app that works with AWS Athena on compressed Parquet (SNAPPY) data.
It works almost fine; however, after every query execution, 2 files get uploaded to the S3_OUTPUT_BUCKET: a csv file and a metadata file (as it should).
These 2 files break the execution of the next query.
I get the following error:
HIVE_BAD_DATA: Not valid Parquet file: s3://MY_OUTPUT_BUCKET/logs/QUERY_NAME/2022/08/07/tables/894a1d10-0c1d-4de1-9e61-13b2b0f79e40.metadata expected magic number: PAR1 got: HP
I need to manually delete those files for the next query to work.
Any suggestions on how to make this work?
(I know I cannot exclude those files with a regex etc., but I don't want to have to delete the files manually for the app to work.)
I read everything about the output files ("Working with query results, recent queries, and output files") but it didn't help.
Any help is appreciated.
When setting up Athena, you need to specify where the metadata and csv from each query execution are written. This must be a different location than the table's data location.
Go to Athena Query Editor > Settings > Manage
and edit Query Result Location to be a different S3 bucket than the table, or a different folder within the same bucket.
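If the app submits queries programmatically, the same separation can be enforced per query by passing an explicit result location; a hedged boto3 sketch (the bucket, prefix, database, and query are placeholders):

import boto3

athena = boto3.client("athena")

# Keep query results in their own prefix, away from the Parquet data the table points at.
response = athena.start_query_execution(
    QueryString="SELECT * FROM my_parquet_table LIMIT 10",  # placeholder query
    QueryExecutionContext={"Database": "my_database"},       # placeholder database
    ResultConfiguration={
        # must NOT be under the table's data location
        "OutputLocation": "s3://my-athena-results-bucket/query-results/"
    },
)
print(response["QueryExecutionId"])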
I have multiple S3 files in a bucket.
Input S3 bucket:
File 1 - 2GB data
File 2 - 500MB data
File 3 - 1GB data
File 4 - 2GB data
and so on. Assume there are 50 such files. The data within the files has the same schema, let's say attribute1, attribute2.
I want to merge these files and write the output into a new bucket as follows, such that each file is less than 1GB, with the same schema as before.
File 1 - < 1GB
File 2 - < 1GB
File 3 - < 1GB
I am looking for AWS-based solutions which I can deliver using AWS CDK. I was considering the following two solutions:
AWS Athena - reads and writes to S3, but I am not sure if I can set a 1GB limit while writing.
AWS Lambda - read the files sequentially, store them in memory, and when the size nears 1GB, write to a new file in the S3 bucket; repeat until all files are processed. I'm worried about the 15-minute timeout; I'm not sure Lambda will be able to process it all.
Expected scale -> overall input file size sum: 1 TB
What would be a good way to go about implementing this? Hope I have phrased the question right; I'd be happy to clarify in the comments if anything is unclear.
Thanks!
Edit:
Based on a comment ->
Apologies for calling it a merge. It is more of a re-split. All files have the same schema and are CSV files. In pseudocode:
List<File> listOfFiles = ReadFromS3(key)
Create a new file named temp.csv
for each file in listOfFiles:
    append file to temp.csv
List<File> finalList = break temp.csv into chunks of at most 1GB each
for (File file : finalList):
    writeToS3(file)
Amazon Athena can run a query across multiple objects in a given Amazon S3 path, as long as they all have the same format (e.g. the same columns in a CSV file).
It can store the result in a new external table, with its location pointing to an S3 bucket, by using a CREATE TABLE AS (CTAS) statement with an external location.
The size of the output files can be controlled by setting the number of output buckets (which is not the same as an S3 bucket); see the sketch after the links below.
See:
Bucketing vs partitioning - Amazon Athena
Set the number or size of files for a CTAS query in Amazon Athena
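As a rough sketch of that approach, submitted from Python (the table, database, bucket names, columns, and bucket_count are all assumptions to adapt; bucketing hashes rows by the chosen column, so pick bucket_count such that total input size divided by it stays comfortably under 1 GB):

import boto3

athena = boto3.client("athena")

ctas = """
CREATE TABLE merged_under_1gb
WITH (
    format = 'TEXTFILE',
    field_delimiter = ',',
    external_location = 's3://my-output-bucket/merged/',
    bucketed_by = ARRAY['attribute1'],
    bucket_count = 60
) AS
SELECT attribute1, attribute2
FROM source_table
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "my_database"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/ctas-results/"},
)

This assumes source_table already exists in the data catalog (for example via a crawler or a CREATE EXTERNAL TABLE statement) and points at the input bucket.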
If your process includes an ETL (Extract, Transform, Load) post-processing step, you could use AWS Glue.
Please find here an example of Glue using S3 as a source.
If you'd like to use it with the Java SDK, the best starting points are:
the Glue GitHub repo
the AWS Java code sample catalog for Glue
Of those, the tutorial for creating a crawler (which you can find in the GitHub repo above) should match your case best, as it crawls an S3 bucket and puts the result in a Glue catalog for transformation.
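If you end up scripting it in Python rather than the Java SDK, a minimal boto3 sketch of creating and starting such a crawler (the crawler name, IAM role, database, and path are placeholders):

import boto3

glue = boto3.client("glue")

# Crawl the input prefix and register the discovered schema in the Glue Data Catalog.
glue.create_crawler(
    Name="csv-input-crawler",                        # placeholder name
    Role="arn:aws:iam::123456789012:role/GlueRole",  # placeholder IAM role with S3 + Glue access
    DatabaseName="my_database",                      # placeholder catalog database
    Targets={"S3Targets": [{"Path": "s3://my-input-bucket/"}]},
)
glue.start_crawler(Name="csv-input-crawler")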
I have ~200,000 S3 files that I need to partition, and I have made an Athena query to produce a target S3 key for each of the original S3 keys. I can clearly create a script out of this, but how do I make the process robust and reliable?
I need to partition the CSV files using info inside each CSV so that each file is moved to a new prefix in the same bucket. The files are mapped 1-to-1, but the new prefix depends on the data inside the file.
The copy command for each would be something like:
aws s3 cp s3://bucket/top_prefix/file.csv s3://bucket/top_prefix/var1=X/var2=Y/file.csv
I can make one big script to copy them all through Athena and a bit of SQL, but I am concerned about doing this reliably, so that I can be sure that all files are copied across and the script doesn't fail, time out, etc. Should I "just run the script"? From my machine, or is it better to run it on an EC2 instance first? Those kinds of questions.
This is a one-off, as the application code producing the files in s3 will start outputting directly to partitions.
If each file contains data for only one partition, then you can simply move the files as you have shown. This is quite efficient because the content of the files does not need to be processed.
If, however, lines within each file belong to different partitions, then you can use Amazon Athena to 'select' lines from an input table and output them to a destination table that resides in a different path, with partitioning configured. However, Athena does not "move" the files -- it simply reads them and then stores the output. If you were to do this for new data each time, you would need to use an INSERT statement to copy the new data into an existing output table, then delete the input files from S3.
Since it is a one-off, and each file belongs in only one partition, I would recommend you simply "run the script". It will go slightly faster from an EC2 instance, but the data is not uploaded/downloaded -- it all stays within S3.
I often create an Excel spreadsheet with a list of input locations and output locations. I create a formula to build the aws s3 cp <input> <output_path> commands, copy them to a text file and execute it as a batch. Works fine!
You mention that the destination depends on the data inside the object, so it would probably work well as a Python script that would loop through each object, 'peek' inside the object to see where it belongs, then issue a copy_object() command to send it to the right destination. (smart-open · PyPI is a great library for reading from an S3 object without having to download it first.)
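A hedged sketch of that loop (the bucket, prefix, and the var1/var2 column names are assumptions; the real partition logic depends on what is inside your files):

import csv
import boto3
from smart_open import open as s3_open  # pip install smart_open[s3]

s3 = boto3.client("s3")
BUCKET = "bucket"           # placeholder
TOP_PREFIX = "top_prefix/"  # placeholder

paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=TOP_PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".csv") or "var1=" in key:
            continue  # skip non-CSV objects and anything already copied to a partition
        # 'Peek' at the first data row without downloading the whole object.
        with s3_open(f"s3://{BUCKET}/{key}", "r") as f:
            row = next(csv.DictReader(f))
        new_key = f"{TOP_PREFIX}var1={row['var1']}/var2={row['var2']}/{key.rsplit('/', 1)[-1]}"
        s3.copy_object(Bucket=BUCKET, Key=new_key, CopySource={"Bucket": BUCKET, "Key": key})

Since copy_object is idempotent, the loop can be re-run after a failure without harm; logging each copied key gives you a simple way to confirm that all ~200,000 files were covered.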
I have reports delivered to S3 in the following structure:
s3://chum-bucket/YYYY/MM/DD/UsageReportYYYYMMDD.zip
s3://chum-bucket/YYYY/MM/DD/SearchReportYYYYMMDD.zip
s3://chum-bucket/YYYY/MM/DD/TimingReportYYYYMMDD.zip
The YYYY, MM, and DD vary per day. The YYYYMMDD in the filename is there because the files all go into one directory on a server before they are moved to S3.
I want to have 1 or 3 crawlers that deliver 3 tables to the catalog, one for each type of report. Is this possible? I can't seem to specify
s3://chum-bucket/**/UsageReport*.zip
s3://chum-bucket/**/SearchReport*.zip
s3://chum-bucket/**/TimingReport*.zip
I can write one crawler that excludes SearchReport and TimingReport, and therefore crawls the UsageReport only. Is that the best way?
Or do I have to completely re-do the bucket / folder / file name design?
Amazon Redshift loads all files in a given path, regardless of filename.
Redshift will not take advantage of partitions (Redshift Spectrum will, but not a normal Redshift COPY statement), but it will read files from any subdirectories within the given path.
Therefore, if you want to load the data into separate tables (UsageReport, SearchReport, TimingReport), then they need to be in separate paths (directories). All files within the designated directory hierarchy must be in the same format and will be loaded into the same table via the COPY command.
An alternative is that you could point to a specific file using manifest files, but this can get messy.
Bottom line: Move the files to separate directories.
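For example, after splitting the reports into per-type prefixes, it becomes one COPY per table; a hedged sketch using the Redshift Data API (the cluster, database, user, IAM role, table names, and prefixes are placeholders, and the format/compression options must match what is actually inside the files):

import boto3

rsd = boto3.client("redshift-data")

# Hypothetical layout after moving the files:
#   s3://chum-bucket/usage/...   s3://chum-bucket/search/...   s3://chum-bucket/timing/...
copies = {
    "usage_report":  "s3://chum-bucket/usage/",
    "search_report": "s3://chum-bucket/search/",
    "timing_report": "s3://chum-bucket/timing/",
}

for table, path in copies.items():
    rsd.execute_statement(
        ClusterIdentifier="my-cluster",  # placeholder
        Database="mydb",                 # placeholder
        DbUser="loader",                 # placeholder
        Sql=f"COPY {table} FROM '{path}' "
            f"IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' CSV;",
    )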
I have data being written from Kafka to a directory in s3 with a structure like this:
s3://bucket/topics/topic1/files1...N
s3://bucket/topics/topic2/files1...N
.
.
s3://bucket/topics/topicN/files1...N
There is already a lot of data in this bucket and I want to use AWS Glue to transform it into Parquet and partition it, but there is way too much data to do it all at once. I was looking into bookmarking, and it seems like you can't use it to read only the most recent data or to process data in chunks. Is there a recommended way of processing data like this so that bookmarking will work when new data comes in?
Also, does bookmarking require Spark or Glue to scan my entire dataset each time I run a job in order to figure out which files are newer than the last run's max_last_modified timestamp? That seems pretty inefficient, especially as the data in the source bucket continues to grow.
I have learned that Glue wants all similar files (files with the same structure and purpose) to be under one folder, with optional subfolders.
s3://my-bucket/report-type-a/yyyy/mm/dd/file1.txt
s3://my-bucket/report-type-a/yyyy/mm/dd/file2.txt
...
s3://my-bucket/report-type-b/yyyy/mm/dd/file23.txt
All of the files under report-type-a folder must be of the same format. Put a different report like report-type-b in a different folder.
You might try putting just a few of your input files in the proper place, running your ETL job, placing more files in the bucket, running again, etc.
I tried this by getting the current files working (one file per day), then back-filling historical files. Note, however, that this did not work completely. Files were processed OK in s3://my-bucket/report-type/2019/07/report_20190722.gzip, but when I tried to add past files to s3://my-bucket/report-type/2019/05/report_20190510.gzip, Glue did not "see" or process the file in the older folder.
However, if I moved the old report into the current partition, it worked: s3://my-bucket/report-type/2019/07/report_20190510.gzip.
AWS Glue bookmarking works only with a select few formats, and only when the data is read using the glueContext.create_dynamic_frame.from_options function. Along with this, job.init() and job.commit() should also be present in the Glue script. You can check out a related answer.
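A minimal sketch of a Glue script with those pieces in place (the S3 paths, source format, and context names are placeholders, and bookmarks must also be enabled on the job itself):

import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext.getOrCreate())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)  # required for bookmark state to be tracked

# transformation_ctx is the key the bookmark state is stored under between runs.
frame = glueContext.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://bucket/topics/topic1/"]},  # placeholder source path
    format="json",                                                 # assumed source format
    transformation_ctx="topic1_source",
)

glueContext.write_dynamic_frame.from_options(
    frame=frame,
    connection_type="s3",
    connection_options={"path": "s3://bucket/parquet/topic1/"},    # placeholder output path
    format="parquet",
    transformation_ctx="topic1_sink",
)

job.commit()  # persists the bookmark so the next run skips files it has already processed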