I want to get old CTR data from Amazon Kinesis, but I am not sure how to do it.
(Not the real-time data, but older data, say one day or a few hours old, since Amazon Kinesis can retain data for up to 365 days.)
Thanks in advance.
You can do this with AWS Lambda. I mention it since it's included as a tag in the question.
When you configure Kinesis as an event source for Lambda, there are three starting-position options: 'Latest', 'Trim horizon', and 'At timestamp'. It sounds like you want 'At timestamp'.
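For example, a minimal boto3 sketch of creating the event source mapping with 'At timestamp' (the stream ARN, function name, and timestamp below are placeholders):

```python
from datetime import datetime, timezone

import boto3

lambda_client = boto3.client("lambda")

# Placeholder ARN and function name -- substitute your own.
response = lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-ctr-stream",
    FunctionName="process-old-ctr-records",
    StartingPosition="AT_TIMESTAMP",
    # Lambda starts reading records added at or after this point in time.
    StartingPositionTimestamp=datetime(2020, 11, 10, tzinfo=timezone.utc),
    BatchSize=100,
)
print(response["UUID"], response["State"])
```

The same three options are shown in the console when you add the Kinesis trigger to the function.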
I am in a situation where I need to extract log JSON data, whose structure may change over time, to AWS S3 in a real-time manner.
I am thinking of using AWS S3 + AWS Glue Streaming ETL. The thing is, the structure or schema of the log JSON data might change (and these changes are unpredictable), so my solution needs to be aware of such changes and should still stream the log data smoothly without causing errors... But as far as I know, all the AWS Glue tutorials demo the case where the structure of the incoming data never changes.
Can you recommend or suggest a solution within AWS that is suitable for my case?
Thanks.
I would like to ask for some advice about handling many application events on AWS. My application sends a lot of different events, in real time, about everything a user does. To collect those events I'm using AWS Kinesis Data Firehose: I have a few delivery streams to which I push the different events. Some events contain data that I want to extract and store in other databases (DynamoDB) or in other S3 files before they land in S3/Redshift; for that case I'm using a Lambda assigned to the specific stream.
My problem is that the business keeps adding new events that they need to collect or process, and for every new event or group of events I have to create a separate delivery stream + S3/Redshift/Elasticsearch destination + Lambda for extracting the data. Also, the events on S3 are stored in a single format, and it is not possible to group them, e.g. by the application's userId or even by the event name, in the stream filename. An ideal S3 layout for these events would look like events/{user_id}/{date}/{event-name}{timestamp}.json.
Maybe I'm using Firehose wrong, or thinking about Firehose wrong for my case; maybe there are other, better AWS services that would give me more control. Would a simple SQS + Lambdas listening on S3 be a better solution here?
Thanks for any advice.
EDIT 12th Nov 2020
This was supposed to be a comment for @Lina, but it was too long to post as a comment, so I updated my question with the solution I picked.
I resolved my issue by feel, so it may not be a pattern worth repeating, but: I wrote a Node.js routing application which I connected to Firehose, plus a few microservices to which the routing app forwards data from Firehose. So now I have a single Firehose delivery stream and I'm taking in 10 different event types. When an event comes in, my routing application decides which microservice should be run, and with what data, based on the event type (the raw Firehose event is still stored on S3 automatically). This gives me the flexibility I needed, as I can extract specific data from the event, do whatever I need with that data by invoking any of the other microservices in the system, and still have the raw event in S3 in case I ever need to replay the history of events.
Some of the events are not passed to any service; they are just stored as raw S3 files, e.g. application logs. I can do many things with those files on the S3 PUT/CREATE event.
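Roughly, the dispatch logic looks like the sketch below (shown here in Python with made-up event-type names and a placeholder invoke step; my actual router is the Node.js service described above, and this variant assumes it runs as a Lambda consumer on the stream):

```python
import base64
import json

# Hypothetical routing table: event type -> microservice that should handle it.
ROUTES = {
    "user_registered": "user-service",
    "order_placed": "order-service",
    "app_log": None,  # no microservice; the raw event only lands on S3
}


def invoke_microservice(name: str, payload: dict) -> None:
    # Placeholder: in the real system this is an HTTP call or Lambda invoke.
    print(f"sending {payload.get('event_name')} to {name}")


def route_record(raw_record: bytes) -> None:
    """Decide which microservice (if any) should receive this event."""
    event = json.loads(raw_record)
    target = ROUTES.get(event.get("event_name"))
    if target is None:
        return  # nothing to do; Firehose already delivers the raw event to S3
    invoke_microservice(target, event)


def handler(event, context):
    """Entry point if the router is wired up as a Lambda on a Kinesis stream."""
    for record in event.get("Records", []):
        data = base64.b64decode(record["kinesis"]["data"])
        route_record(data)
```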
I hope that it will help someone with a similar problem.
I've been looking at AWS Kinesis Analytics for some time now, and I'm struggling to make the following scenario work:
Say you have a Firehose that is connected to Kinesis Analytics as an input stream.
This Firehose is meant to output data every 60s.
I write data into this Firehose every 2s or so, like {"value":1}, where the 1 is a random integer (it can be 1, 5, 32, and so on).
How can I write my analytics to, say, find the average of all values reported in the last 60s and pass it to a Lambda?
I tried using various windows; however, Analytics seems to output data every few seconds instead of once every 60s.
Please have a look at the following links, which have detailed examples of how to achieve this:
https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-avg.html
https://docs.aws.amazon.com/kinesisanalytics/latest/sqlref/sql-reference-windowed-aggregation-stream.html
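In short, getting output every few seconds usually means the query is using a sliding window, which emits results continuously as records arrive; a tumbling window, e.g. grouping on STEP(source.ROWTIME BY INTERVAL '60' SECOND) as covered in the second link, emits one row per 60-second window. Purely to illustrate the tumbling-window behaviour (this is plain Python, not Kinesis Analytics code, and the record values are made up):

```python
import time
from collections import defaultdict

# Toy tumbling-window average: each record is assigned to a 60-second bucket,
# and one average is emitted per bucket rather than per record.
WINDOW_SECONDS = 60
buckets = defaultdict(list)


def ingest(value, arrival_ts):
    """Assign a record like {"value": 1} to its 60-second window."""
    window_start = int(arrival_ts // WINDOW_SECONDS) * WINDOW_SECONDS
    buckets[window_start].append(value)


def emit_closed_windows(now):
    """Emit a single result for every window that has fully elapsed."""
    for window_start in sorted(list(buckets)):
        if window_start + WINDOW_SECONDS <= now:
            values = buckets.pop(window_start)
            print(f"window {window_start}: avg={sum(values) / len(values):.2f}")


# Records arriving every ~2 seconds, as in the question.
start = time.time()
for i, value in enumerate([1, 5, 32, 7, 3]):
    ingest(value, start + 2 * i)
emit_closed_windows(start + 120)  # every ingested record's window has closed
```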
I'm new to AWS and would like some guidance.
I want to process the oldest unprocessed record but I cannot seem to get the params right.
For the shard iterator:
I've tried TRIM_HORIZON, which gave me all the records since the beginning.
I've also tried LATEST, which only gave me the single latest record.
Not sure if these additional details will help but...
I'm putting my own records in through Lambda on the AWS console
I'm debugging this by looking at the log files in CloudWatch
I'm getting records through the shard iterator (TRIM_HORIZON and LATEST)
My getRecords limit is set at 100
Thanks in advance!
There is no "oldest unprocessed record", as Kinesis doesn't know what you've processed (for example, you may have fetched the records but not done anything with them).
If you're using Kinesis, I strongly recommend using the Kinesis Client Library, which has the concept of checkpoints. These are essentially a nice wrapper on top of the AFTER_SEQUENCE_NUMBER shard iterator, which translates to "oldest uncheckpointed record" - about as close as you'll get to "oldest unprocessed record".
(You could always implement this logic yourself, but why not reuse work that Amazon has already done for you?)
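If you do want to roll this yourself, the pattern looks roughly like the boto3 sketch below; the stream name, shard id, and the process/checkpoint helpers are placeholders for your own code and checkpoint store:

```python
import boto3

kinesis = boto3.client("kinesis")

STREAM_NAME = "my-stream"          # placeholder
SHARD_ID = "shardId-000000000000"  # placeholder


def process(data: bytes) -> None:
    ...  # your own record handling


def checkpoint(sequence_number: str) -> None:
    ...  # persist the last processed sequence number somewhere durable


def consume_after_checkpoint(last_processed_sequence_number: str) -> None:
    """Resume reading just after the last record you checkpointed."""
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=SHARD_ID,
        ShardIteratorType="AFTER_SEQUENCE_NUMBER",
        StartingSequenceNumber=last_processed_sequence_number,
    )["ShardIterator"]

    while iterator:
        resp = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in resp["Records"]:
            process(record["Data"])
            checkpoint(record["SequenceNumber"])
        iterator = resp.get("NextShardIterator")
        # A real consumer would back off here when no records are returned.
```

That loop plus the checkpoint store is roughly the bookkeeping the KCL does for you, on top of lease and shard management.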
Considering the fact that AWS Data Pipeline is not available in the Singapore region, are there any alternatives available to efficiently push CSV data to DynamoDB?
If it were me, I would set up an S3 event notification on a bucket that fires a Lambda function each time a CSV file is dropped into it.
The notification lets Lambda know that a new file is available, and the Lambda function is responsible for loading the data into DynamoDB.
This works best (because of Lambda's execution-time and memory limits) if the CSV files are not huge, so they can be processed in a reasonable amount of time. The bonus is that, once it is working, the only work left is to simply drop new files into the right bucket - no server required.
Here is a GitHub repository that has a CSV -> DynamoDB loader written in Java - it might help you get started.
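For illustration, here is a minimal Python sketch of such a Lambda handler; the table name is a placeholder, and it assumes the CSV header row matches the table's attribute names, including the primary key:

```python
import csv
import io
import urllib.parse

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("csv-import")  # placeholder table name


def handler(event, context):
    """Triggered by an S3 ObjectCreated notification for an uploaded CSV."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = csv.DictReader(io.StringIO(body.decode("utf-8")))

        # batch_writer groups the writes into BatchWriteItem calls and retries
        # unprocessed items for you.
        with table.batch_writer() as batch:
            for row in rows:
                batch.put_item(Item=row)
```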