Amazon S3 notification for file change

Initially a CSV file is uploaded to an S3 bucket, and we append to that file from a script whenever a new row is added. We want the script to run only when the CSV file is modified. Is there any watcher that can notify the script to run when the CSV file changes?

There is S3 event notification for that; you would be interested in the s3:ObjectCreated event.
https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
You should also take a look at the S3 documentation and note the difference between S3 and a file system: an "update" or "append" operation on S3 actually replaces the whole object.
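For example, a minimal sketch of a Lambda handler subscribed to that s3:ObjectCreated:* notification; run_my_script is a hypothetical placeholder for whatever processing your script does:
import urllib.parse

def lambda_handler(event, context):
    # Each record corresponds to one s3:ObjectCreated:* event on the bucket
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        if key.endswith('.csv'):
            print(f"CSV object written: s3://{bucket}/{key}")
            # run_my_script(bucket, key)  # hypothetical hook for your processing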

Related

AWS Lambda avoid recursive trigger

I'm downloading data from an API and writing it to a csv file that I store in an S3 bucket. I'm then copying my file from this input bucket into an output bucket with a Lambda function. From the output bucket I'm ingesting it into a MySQL RDS instance with another Lambda function.
The copy-to-another-bucket and upload-to-RDS lambda functions both get triggered when I create a new object in a bucket. Since I'm appending to my csv file, the upload-to-RDS function gets triggered way more than it should and I end up with ~30 rows in my database instead of 6.
I thought by copying the files between S3 buckets I could avoid this, but it doesn't help. Is there any way to only upload the csv file to the database once it has been written and not while it's being updated? Can I delay the trigger maybe?
The only other solution I can think of is to skip the copy-to-another-bucket function altogether and to schedule the upload-to-RDS function.
You need to realize that S3 doesn't support updating an existing file. If you are appending a row to an existing CSV file in S3, then that operation requires uploading the entire contents of the CSV file to S3 again, which S3 sees as a new object.
If you need to store a temporary version of the CSV file in S3 while you are updating it, then you should store it in a separate path, like s3://your_bucket/tmp and then when you have completed your updates, move it to the final path like s3://your_bucket/complete and only configure the Lambda trigger on the /complete path.
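A minimal sketch of that flow with boto3 (the bucket name, paths, and helper name are placeholders; the upload-to-RDS Lambda trigger is assumed to be configured with a prefix filter of complete/):
import boto3

s3 = boto3.client('s3')
BUCKET = 'your_bucket'  # placeholder

def publish_when_complete(local_path, name):
    # Work on the file under tmp/ while rows are still being appended
    s3.upload_file(local_path, BUCKET, f'tmp/{name}')
    # ... further appends re-upload to tmp/{name} ...
    # Once the file is final, move it to complete/, the only prefix the
    # upload-to-RDS Lambda trigger listens on
    s3.copy_object(Bucket=BUCKET,
                   CopySource={'Bucket': BUCKET, 'Key': f'tmp/{name}'},
                   Key=f'complete/{name}')
    s3.delete_object(Bucket=BUCKET, Key=f'tmp/{name}')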

No extension while using from_options in DynamicFrameWriter in AWS Glue Spark context

I am new to AWS. I am writing an **AWS Glue job** for some transformation and I have got that working. But after the transformation I used **from_options in the DynamicFrameWriter class** to write the data frame out as a CSV file, and the file is copied to S3 without any extension. Also, is there any way to rename the copied file, using DynamicFrameWriter or anything else? Please help.
Step 1: Triggered an AWS Glue job for transforming files in S3 to an RDS instance.
Step 2: On successful job completion, transfer the contents of the file to another S3 bucket using from_options in the DynamicFrameWriter class. But the file doesn't have any extension.
You have to set the format of the file you are writing,
e.g. format="csv"
This should set the .csv file extension. You cannot, however, choose the name of the file you want to write it as; the only option you have is a separate S3 operation where you change the key name of the file.
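For example, a minimal sketch assuming an existing GlueContext named glueContext and a transformed DynamicFrame named dyf; the output path is a placeholder:
glueContext.write_dynamic_frame.from_options(
    frame=dyf,
    connection_type="s3",
    connection_options={"path": "s3://your-bucket/output/"},
    format="csv")
# Glue/Spark chooses the part-file names itself; giving the output a specific
# name would need a separate S3 copy of the written object to a new key.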

AWS S3: .csv file is downloaded as .txt

I have 2 AWS accounts, each of which has one S3 bucket. I uploaded two same-size .CSV files, one to each S3 bucket.
When I try Download or Download As, the file is downloaded as a .CSV file in the first account. BUT(!!) when I try to download the file from the second account, it is downloaded as .TXT.
How can this happen? Both files were created in the same way: through a Redshift UNLOAD query, which copies selected data from Redshift to S3.
UPDATE:
Can it be because, for this document in this account, **Server side encryption** is set to AWS-KMS?
I noticed that the file that gets converted from .csv to .txt has "Server side encryption: AWS-KMS", while the .csv file that is downloaded as .csv has "Server side encryption: NONE".
UPDATE: tried in different browsers - same result
Check the headers for each object in the AWS S3 console and compare the Content-Type values. Content-Type provides a hint to web browsers on what data the object contains.
If Content-Type does not exist or does not contain text/csv, add or modify the header in the S3 console or via your favorite S3 application such as CloudBerry.
John is right about the Content-Type not being text/csv. Sometimes S3 will get it right and sometimes it won't. If you can't correct this manually yourself, you can run a Lambda function to do it for you every time you upload a new object. You can use a Python 2.7 template Lambda function to download the object from the bucket, use the mimetypes library's guess_type on your S3 object's key, and then re-upload the file to the same bucket. You will need to trigger this function on S3 object uploads and give it the necessary permissions (s3:GetObject and s3:PutObject).
P.S. This will work for files with any extension. If you know you are only going to upload .csv files, you can ignore the mimetypes and directly re-upload the object with
bucket.upload_file(filename, key, ExtraArgs={'ContentType': 'text/csv'})
If mimetypes cannot guess the type, then you might need to add the types; look at an example here: https://www.programcreek.com/python/example/5209/mimetypes.add_type
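A minimal sketch of that idea, assuming a current Python runtime with boto3; instead of downloading and re-uploading, it rewrites the object in place with copy_object and MetadataDirective='REPLACE', which achieves the same Content-Type correction:
import mimetypes
import urllib.parse
import boto3

s3 = boto3.client('s3')

def lambda_handler(event, context):
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        guessed, _ = mimetypes.guess_type(key)
        if guessed is None:
            continue  # unknown extension; see mimetypes.add_type above
        head = s3.head_object(Bucket=bucket, Key=key)
        if head.get('ContentType') == guessed:
            continue  # already correct, and this also stops the trigger loop
        # Rewrite the object in place with the corrected Content-Type
        s3.copy_object(Bucket=bucket, Key=key,
                       CopySource={'Bucket': bucket, 'Key': key},
                       ContentType=guessed, MetadataDirective='REPLACE')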
Good Luck!
Here is a Scala solution (specifying the content type explicitly when uploading):
import java.io.{ByteArrayInputStream, InputStream}
import com.amazonaws.services.s3.model.ObjectMetadata
// Example payload to upload as CSV
val settingsLine: String = "csvdata1,csvdata2,csvdata3"
val settingsStream: InputStream = new ByteArrayInputStream(settingsLine.getBytes())
// Set Content-Type so the object downloads as CSV
val metadata: ObjectMetadata = new ObjectMetadata()
metadata.setContentType("text/csv")
s3Client.putObject(bucketName, prefix, settingsStream, metadata)

S3 bucket script to add timestamp in filename on upload

I'm looking for a way to add a timestamp to every file that is uploaded to an S3 bucket, on the Amazon side. There is, of course, an option to do this client-side before the upload, but I don't think that is as nice and clean as having some script run in the bucket itself every time a new file is uploaded. I didn't find anything in the docs, though.
There is no capability within Amazon S3 to change the Key (filename) of a file based upon upload time.
Given that your desire is to avoid name conflicts, some choices are:
Use a unique GUID or a timestamp to name the file when uploading. This will avoid naming conflicts.
Upload the file to Bucket A, then use a Lambda function triggered on ObjectCreation to copy the object to Bucket B with a unique name based on timestamp
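A minimal sketch of that second option: a Lambda on Bucket A's ObjectCreated event that copies each new object into Bucket B under a timestamped key (the destination bucket name is a placeholder):
import urllib.parse
from datetime import datetime, timezone
import boto3

s3 = boto3.client('s3')
DEST_BUCKET = 'bucket-b'  # placeholder

def lambda_handler(event, context):
    for record in event.get('Records', []):
        src_bucket = record['s3']['bucket']['name']
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        stamp = datetime.now(timezone.utc).strftime('%Y%m%dT%H%M%SZ')
        # Prefix the original key with an upload timestamp to keep names unique
        s3.copy_object(Bucket=DEST_BUCKET,
                       Key=f'{stamp}-{key}',
                       CopySource={'Bucket': src_bucket, 'Key': key})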
You can try with a lambda function handling the ObjectCreated event. See this tutorial.
Not sure that works though.

aws s3 replace file atomically

Environment
I copied a file, ./barname.bin, to s3, using the command aws s3 cp ./barname.bin s3://fooname/barname.bin
I have a different file, ./barname.1.bin that I want to upload in place of that file
How can I upload and replace (overwrite) the file at s3://fooname/barname.bin with ./barname.1.bin?
Goals:
Don't change the s3 url used to access the file (new file should also be available at s3://fooname/barname.bin).
zero/minimum 'downtime'/unavailability of the s3 link.
As I understand it, you've got an existing file located at s3://fooname/barname.bin and you want to replace it with a new file. To replace that, you should just upload a new one on top of the old one:
aws s3 cp ./barname.1.bin s3://fooname/barname.bin
The old file will be replaced. According to the S3 docs, the replacement is atomic, though due to S3's replication, requests for the key may still return the old file for some time.
Note (thanks #Chris Kuehl): though the replacement is technically atomic, it's possible for multipart downloads to end up with chunks from different versions of the file. 😬
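The same overwrite can also be done programmatically; here is a minimal boto3 sketch, with head_object used only as a sanity check that the key now serves the new object:
import boto3

s3 = boto3.client('s3')
# Uploading to the existing key replaces the object; the URL does not change
s3.upload_file('./barname.1.bin', 'fooname', 'barname.bin')
head = s3.head_object(Bucket='fooname', Key='barname.bin')
print(head['ContentLength'], head['ETag'])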