I need help figuring out how to pass an array of S3 file names to the second AWS Lambda function I am working on. The first function queries a DynamoDB index table for the list of S3 files in Glacier and issues a retrieval request for each of them. The second Lambda should receive the same filename parameter, one file at a time, but be invoked roughly 4 hours later, once each file has been retrieved from Glacier. Is there any clean way to do this with Lambda or other AWS services using JavaScript? Any help is appreciated, thanks!
Glacier retrieval jobs are not guaranteed to be complete within 4 hours (archives typically become accessible within 3–5 hours, but that's not a guarantee). Also, scheduling Lambda function invocations for some time in the future is not the best way to solve this problem.
You should make use of Glacier notifications. When a Glacier retrieval job completes, it can post a message to an SNS topic. SNS and Lambda are integrated so you can invoke Lambda functions from SNS notifications.
The Glacier SDK supports archive retrieval (and inventory retrieval) with SNS notifications at completion time via initiate_job().
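The answer above uses the boto3 method name; since the question mentions JavaScript, here is a minimal sketch of the same call with the AWS SDK for JavaScript v3 (InitiateJobCommand). The vault name, the SNS topic ARN and the way the archive ID is obtained are assumptions, not values from the question.

```typescript
import { GlacierClient, InitiateJobCommand } from "@aws-sdk/client-glacier";

const glacier = new GlacierClient({});

// Ask Glacier to retrieve one archive and notify an SNS topic when the job completes.
// The SNS topic can then trigger the second Lambda function.
export async function requestRetrieval(archiveId: string): Promise<string | undefined> {
  const result = await glacier.send(new InitiateJobCommand({
    accountId: "-",                 // "-" means the account that owns the credentials
    vaultName: "my-vault",          // placeholder vault name
    jobParameters: {
      Type: "archive-retrieval",
      ArchiveId: archiveId,
      SNSTopic: "arn:aws:sns:us-east-1:123456789012:retrieval-complete", // placeholder topic
    },
  }));
  return result.jobId;              // store this to correlate the completion notification
}
```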
EDIT: this does not work if the S3 objects were archived to Glacier via lifecycle management, because retrieval notifications require you to supply a Glacier vault name, and lifecycle management does not expose that vault name to you (it's internal to the AWS service). [Thanks @Mark B]
Related
I'm looking for a way to be notified when an object in s3 changes storage class. I thought there would be a bucket event notification for this but I don't see it as an option. How can I know when an object moves from STANDARD to GLACIER? We have systems that depend on objects not being in GLACIER. If they change to GLACIER, we need to be made aware and handle them accordingly.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/notification-how-to-event-types-and-destinations.html#supported-notification-event-types
You can use S3 server access logs to capture lifecycle actions, but I think that's about it:
"Amazon S3 server access logs can be enabled in an S3 bucket to capture S3 Lifecycle-related actions such as object transition to another storage class"
Taken from the AWS docs on lifecycle and other bucket configuration.
You could certainly roll your own notifications for storage class transitions, though it might be a bit more involved than you are hoping for. You need a separate bucket to receive your access logs. Set up an S3 notification for object creation in the new logs bucket to trigger a Lambda function that processes each new log file. In that Lambda function, use Athena to query the logs and fire off an SNS alert, or perform some corrective action in code (see the sketch below).
There are some limitations to be aware of, though: log delivery is best effort, which means you might not get the log records for a few hours.
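Here is a rough sketch of that log-processing Lambda. The Athena table name s3_access_logs, its column names, the results bucket and the SNS topic ARN are all placeholders you would need to create and adjust, and the operation filter assumes the documented S3.TRANSITION* operation codes.

```typescript
import {
  AthenaClient,
  StartQueryExecutionCommand,
  GetQueryExecutionCommand,
  GetQueryResultsCommand,
} from "@aws-sdk/client-athena";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const athena = new AthenaClient({});
const sns = new SNSClient({});

export async function handler(): Promise<void> {
  // Look for lifecycle transition operations recorded in the access logs.
  const start = await athena.send(new StartQueryExecutionCommand({
    QueryString:
      "SELECT key, requestdatetime FROM s3_access_logs WHERE operation LIKE 'S3.TRANSITION%'",
    QueryExecutionContext: { Database: "default" },
    ResultConfiguration: { OutputLocation: "s3://my-athena-results/" }, // placeholder bucket
  }));
  const QueryExecutionId = start.QueryExecutionId;

  // Naive polling loop; Step Functions would be more robust for long-running queries.
  let state = "QUEUED";
  while (state === "QUEUED" || state === "RUNNING") {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    const exec = await athena.send(new GetQueryExecutionCommand({ QueryExecutionId }));
    state = exec.QueryExecution?.Status?.State ?? "FAILED";
  }

  const results = await athena.send(new GetQueryResultsCommand({ QueryExecutionId }));
  const rows = results.ResultSet?.Rows ?? [];
  if (rows.length > 1) { // first row is the header
    await sns.send(new PublishCommand({
      TopicArn: "arn:aws:sns:us-east-1:123456789012:storage-class-alerts", // placeholder
      Message: `${rows.length - 1} object(s) transitioned storage class`,
    }));
  }
}
```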
Updated 28/5/21
If logging is enabled, you should see the various lifecycle operations logged as they happen, give or take a few hours. If not, are you definitely meeting the criteria for transitioning objects to Glacier (e.g. has the transition period configured in your lifecycle rule, such as 30 days, actually elapsed)?
As for:
The log record for a particular request might be delivered long after the request was actually processed, or it might not be delivered at all.
Consider S3's consistency model and the SLA on data durability: there is a possibility of data loss for any object in S3. I think the risk of losing log records is relatively low, but it could happen.
You could also go for a more active approach: use the S3 API from a Lambda function triggered by CloudWatch Events (cron-like scheduling) to scan all the objects in the bucket and act accordingly (send an email, take corrective action, etc.), as in the sketch below. Bear in mind this might get expensive depending on how often you run the Lambda and how many objects are in your bucket, but low volumes might even fit in the free tier depending on your usage.
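A minimal sketch of that scheduled scan, assuming a placeholder bucket and SNS topic; the storage-class check below only covers GLACIER and DEEP_ARCHIVE.

```typescript
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const s3 = new S3Client({});
const sns = new SNSClient({});

export async function handler(): Promise<void> {
  const glacierKeys: string[] = [];
  let token: string | undefined;

  // Page through the bucket and collect any objects already in a Glacier class.
  do {
    const page = await s3.send(new ListObjectsV2Command({
      Bucket: "my-bucket",                 // placeholder
      ContinuationToken: token,
    }));
    for (const obj of page.Contents ?? []) {
      if (obj.StorageClass === "GLACIER" || obj.StorageClass === "DEEP_ARCHIVE") {
        glacierKeys.push(obj.Key ?? "");
      }
    }
    token = page.NextContinuationToken;
  } while (token);

  if (glacierKeys.length > 0) {
    await sns.send(new PublishCommand({
      TopicArn: "arn:aws:sns:us-east-1:123456789012:storage-class-alerts", // placeholder
      Message: `Objects found in Glacier storage classes: ${glacierKeys.length}`,
    }));
  }
}
```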
As of Nov 2021 you can now do this via AWS EventBridge.
Simply create a new rule for the S3 bucket that handles the Object Storage Class Changed event (note that the bucket needs EventBridge notifications enabled for its events to reach EventBridge).
See https://aws.amazon.com/blogs/aws/new-use-amazon-s3-event-notifications-with-amazon-eventbridge/
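For illustration, the rule can also be created programmatically. A sketch with the AWS SDK for JavaScript v3, where the rule name, bucket name and target Lambda ARN are placeholders (the Lambda also needs a resource-based permission allowing events.amazonaws.com to invoke it):

```typescript
import { EventBridgeClient, PutRuleCommand, PutTargetsCommand } from "@aws-sdk/client-eventbridge";

const events = new EventBridgeClient({});

export async function createStorageClassRule(): Promise<void> {
  // Match the "Object Storage Class Changed" event for one bucket on the default bus.
  await events.send(new PutRuleCommand({
    Name: "s3-storage-class-changed",
    EventPattern: JSON.stringify({
      source: ["aws.s3"],
      "detail-type": ["Object Storage Class Changed"],
      detail: { bucket: { name: ["my-bucket"] } },   // placeholder bucket
    }),
  }));

  // Route matching events to the Lambda that should handle them.
  await events.send(new PutTargetsCommand({
    Rule: "s3-storage-class-changed",
    Targets: [{
      Id: "notify-lambda",
      Arn: "arn:aws:lambda:us-east-1:123456789012:function:handle-transition", // placeholder
    }],
  }));
}
```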
I have gone through a couple of Stack Overflow questions regarding hourly backups from DynamoDB to S3, where the best solution turned out to be to enable a DynamoDB Stream, subscribe a Lambda function to it, and push to S3.
I am trying to understand whether pushing directly from Lambda to S3 is fine, or whether to go from Lambda to Kinesis Firehose and then to S3. Can someone share what the advantage is of introducing Firehose in between? We trigger the Lambda only after a specific batch window anyway, which implies we are already buffering there.
Thanks in advance.
Firehose gives you the ability to convert and compress your data. In addition, you can attach a Glue metadata table directly, so you can query your data with Athena.
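A minimal sketch of that variant, where the stream-triggered Lambda simply hands records to Firehose and lets Firehose do the buffering, conversion and compression; the delivery stream name is a placeholder.

```typescript
import { FirehoseClient, PutRecordBatchCommand } from "@aws-sdk/client-firehose";
import type { DynamoDBStreamEvent } from "aws-lambda";

const firehose = new FirehoseClient({});

export async function handler(event: DynamoDBStreamEvent): Promise<void> {
  // One newline-delimited JSON record per stream record; Firehose batches these
  // into larger S3 objects (and can convert/compress them on the way).
  const records = event.Records.map((record) => ({
    Data: Buffer.from(JSON.stringify(record.dynamodb?.NewImage ?? {}) + "\n"),
  }));

  if (records.length > 0) {
    // PutRecordBatch accepts up to 500 records per call; larger batches need chunking.
    await firehose.send(new PutRecordBatchCommand({
      DeliveryStreamName: "ddb-backup-stream",   // placeholder
      Records: records,
    }));
  }
}
```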
You can write a Lambda function that reads a DynamoDB table, gets a result set, encodes the data in some format (e.g. JSON), and then places that JSON into an Amazon S3 bucket. You can use scheduled events to fire off the Lambda function on a regular schedule.
Here is an AWS tutorial that shows you how to use scheduled events to invoke a Lambda function:
Creating scheduled events to invoke Lambda functions
This AWS tutorial also shows you how to read data from an Amazon DynamoDB table from a Lambda function.
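As a rough sketch of that scheduled export (the table and bucket names are placeholders, and a real table would need a paginated Scan):

```typescript
import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";
import { unmarshall } from "@aws-sdk/util-dynamodb";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const ddb = new DynamoDBClient({});
const s3 = new S3Client({});

export async function handler(): Promise<void> {
  // Read the table (a single Scan page here; follow LastEvaluatedKey for big tables).
  const result = await ddb.send(new ScanCommand({ TableName: "my-table" })); // placeholder
  const items = (result.Items ?? []).map((item) => unmarshall(item));

  // Write the snapshot as one JSON object per run.
  await s3.send(new PutObjectCommand({
    Bucket: "my-backup-bucket",                               // placeholder
    Key: `backups/${new Date().toISOString()}.json`,
    Body: JSON.stringify(items),
    ContentType: "application/json",
  }));
}
```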
I have a process that publishes data to AWS IoT Core, which triggers a Lambda function that inserts the payload into an Amazon S3 bucket.
The process sends around 1.2 million records within a few seconds, and when I check the bucket I see that I have lost around 10% of the data. If I add a sleep to the Lambda function, it runs beyond 15 minutes.
What is the solution for this scenario?
It appears that your requirement is to capture the events coming into IoT-Core and save them to Amazon S3.
It also sounds like your Lambda functions are being throttled because they are hitting concurrency limits, and data is being lost. By default, there is a limit of 1,000 concurrent AWS Lambda executions per Region. This could potentially be fixed by requesting an increase in the maximum number of concurrent executions.
Here is a diagram from How AWS IoT works:
As shown in the diagram, the Rules Engine can actually be used to send data to Amazon S3 without requiring Lambda. However, this creates a separate object in Amazon S3 for every message.
If you wish to combine messages together, you can Write to Kinesis Data Firehose Using AWS IoT. Firehose will buffer the data by time or size and then output multiple messages to a single Amazon S3 object. This can be a good way to handle large volumes of data, and it also makes the resulting objects in S3 easier to work with because fewer objects are created, which makes them faster to query and process later (e.g. with Amazon Athena).
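One way to wire that up programmatically, sketched with the AWS SDK for JavaScript v3; the rule name, topic filter, delivery stream name and role ARN are all placeholders, and doing the same thing in the console or CloudFormation is equally valid.

```typescript
import { IoTClient, CreateTopicRuleCommand } from "@aws-sdk/client-iot";

const iot = new IoTClient({});

export async function createFirehoseRule(): Promise<void> {
  // Rules engine rule: select messages from a topic filter and hand them to Firehose.
  await iot.send(new CreateTopicRuleCommand({
    ruleName: "devices_to_firehose",
    topicRulePayload: {
      sql: "SELECT * FROM 'devices/+/telemetry'",   // placeholder topic filter
      awsIotSqlVersion: "2016-03-23",
      ruleDisabled: false,
      actions: [{
        firehose: {
          deliveryStreamName: "iot-to-s3",          // placeholder delivery stream
          roleArn: "arn:aws:iam::123456789012:role/iot-firehose-role", // placeholder role
          separator: "\n",                          // newline-delimit records in the S3 objects
        },
      }],
    },
  }));
}
```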
Going directly from an IoT-Core rule to a Lambda can be fragile.
You can use Kinesis to buffer the data, or Firehose to stream it directly to S3. These are standard patterns that AWS recommends for IoT in the AWS Well-Architected Framework (https://d1.awsstatic.com/whitepapers/architecture/AWS-IoT-Lens.pdf).
There is a service that generates data in an S3 bucket that is used for warehouse querying. Data is inserted into S3 via a daily mechanism.
I am interested in copying that data from S3 into my own AWS account to further classify it. The classification needs to happen in my account because it is based on information present there and is specific to my team/service. The service generating the data in S3 is neither concerned with the classification nor has the data needed to make the classification decision.
Each S3 file consists of JSON objects (records). For every record, I need to look into a DynamoDB table. Based on whether data exists in the DynamoDB table, I need to add an additional attribute to the JSON object and store the resulting list in another S3 bucket in my account.
The way I am considering doing this:
Trigger a scheduled CloudWatch event periodically to invoke a Lambda that copies the files from the source S3 bucket into a bucket (let's say Bucket A) in my account.
Then, use another scheduled CloudWatch event to invoke a Lambda that reads the records in the JSON, compares them with the DynamoDB table to determine the classification, and writes the updated records to another bucket (let's say Bucket B).
I have a few questions regarding this:
Are there better alternatives for achieving this?
Would using aws s3 sync in the first Lambda be a good way to achieve this? My concern is the Lambdas timing out due to the large amount of data, especially the second Lambda, which needs to compare every record against DynamoDB.
Rather than setting up scheduled events, you can trigger the AWS Lambda functions in real-time.
Use Amazon S3 Events to trigger the Lambda function as soon as a file is created in the source bucket. The Lambda function can call CopyObject() to copy the object to Bucket-A for processing.
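A minimal sketch of that first, event-driven Lambda; the destination bucket name is a placeholder.

```typescript
import { S3Client, CopyObjectCommand } from "@aws-sdk/client-s3";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});

export async function handler(event: S3Event): Promise<void> {
  for (const record of event.Records) {
    const sourceBucket = record.s3.bucket.name;
    // Event keys are URL-encoded, with spaces as '+'.
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));

    await s3.send(new CopyObjectCommand({
      CopySource: `${sourceBucket}/${encodeURIComponent(key)}`,
      Bucket: "bucket-a",                 // placeholder destination bucket
      Key: key,
    }));
  }
}
```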
Similarly, an Event on Bucket-A could then trigger another Lambda function to process the file (see the sketch after the notes below). Some things to note:
Lambda functions run for a maximum of 15 minutes
You can increase the memory assigned to a Lambda function, which also increases the amount of CPU assigned. This might speed the function up enough to keep it under the 15-minute limit.
By default, only 512 MB of ephemeral storage (/tmp) is made available to a Lambda function.
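A rough sketch of that second, classification Lambda; the table name, its key attribute, the record shape and the destination bucket are all placeholders to adapt.

```typescript
import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
import { DynamoDBClient, GetItemCommand } from "@aws-sdk/client-dynamodb";
import type { S3Event } from "aws-lambda";

const s3 = new S3Client({});
const ddb = new DynamoDBClient({});

export async function handler(event: S3Event): Promise<void> {
  for (const rec of event.Records) {
    const key = decodeURIComponent(rec.s3.object.key.replace(/\+/g, " "));

    // Read the file of JSON records from Bucket-A.
    const obj = await s3.send(new GetObjectCommand({ Bucket: rec.s3.bucket.name, Key: key }));
    const records: Array<{ id: string }> = JSON.parse(await obj.Body!.transformToString());

    // Look each record up in DynamoDB and add the classification attribute.
    const classified = [];
    for (const r of records) {
      const hit = await ddb.send(new GetItemCommand({
        TableName: "classification-table",   // placeholder table
        Key: { id: { S: r.id } },            // placeholder key attribute
      }));
      classified.push({ ...r, classified: hit.Item !== undefined });
    }

    // Write the enriched list to Bucket-B.
    await s3.send(new PutObjectCommand({
      Bucket: "bucket-b",                    // placeholder bucket
      Key: key,
      Body: JSON.stringify(classified),
      ContentType: "application/json",
    }));
  }
}
```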
If the data is too big, or takes too long to process, then you will need to find a way to do it outside of AWS Lambda. For example, using Amazon EC2 instances.
If you can export the data from DynamoDB (perhaps on a regular basis), you might be able to use Amazon Athena to do all the processing, but that depends on what you're trying to do. If it is simple SELECT/JOIN queries, it might be suitable.
I need to start a Lambda Function when an object has been created on an S3 Bucket. I found 2 solutions to do this.
Using AWS::S3::Bucket NotificationConfiguration.
Using a CloudWatch AWS::Events::Rule.
They both seem to do exactly the same thing, which is to track specific changes and launch a Lambda function when they happen. I could not find any information on which one should be used. I'm using a CloudFormation template to provision the Lambda, the S3 bucket and the trigger.
Which one should I use to call a Lambda on Object level changes and why?
Use the first one (the S3 NotificationConfiguration), because:
A push model is much better than a pull model. Push means data is sent to you as soon as it is produced, instead of you polling for it at some interval. This is the era of push notifications: you don't check Facebook every 5 minutes to see whether someone has liked your picture or replied to your comment.
In terms of cost and effort, S3 event notifications also win.
CloudWatch Events would be the best option if S3 notifications didn't exist, but since they do exist, they are the better fit. Besides, if the service itself provides the feature, there is little reason to reach for an alternative like CloudWatch rules.
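For reference, here is what option 1 looks like when expressed through the AWS SDK for JavaScript v3 (in a CloudFormation template the same settings go under the bucket's NotificationConfiguration property instead). The bucket name and function ARN are placeholders, and the function also needs a resource-based permission allowing s3.amazonaws.com to invoke it.

```typescript
import { S3Client, PutBucketNotificationConfigurationCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});

export async function wireBucketToLambda(): Promise<void> {
  // Invoke the Lambda for every object created in the bucket.
  await s3.send(new PutBucketNotificationConfigurationCommand({
    Bucket: "my-bucket",                                      // placeholder
    NotificationConfiguration: {
      LambdaFunctionConfigurations: [{
        Events: ["s3:ObjectCreated:*"],
        LambdaFunctionArn:
          "arn:aws:lambda:us-east-1:123456789012:function:process-object", // placeholder
      }],
    },
  }));
}
```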