I'm trying to trigger an AWS Step Functions state machine whenever a new file is uploaded to an S3 bucket. I'm using CloudWatch Events rules to do this, but I'm getting this warning:
I tried to follow the AWS documentation link "https://docs.aws.amazon.com/step-functions/latest/dg/tutorial-cloudwatch-events-s3.html#tutorial-cloudwatch-events-s3-step-1" but the state machine was not invoked.
Can anyone tell me what exactly I'm doing wrong?
EDIT
I created this trail, and the region is Ohio (us-east-2).
I found the issue: data events must be enabled as well for S3 API calls to be logged. This was not mentioned in the AWS document above. A sketch of enabling them follows.
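For reference, a minimal sketch of enabling S3 data events on the trail with the AWS SDK for Java (the trail and bucket names here are hypothetical placeholders, not values from the tutorial):

```java
import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.DataResource;
import com.amazonaws.services.cloudtrail.model.EventSelector;
import com.amazonaws.services.cloudtrail.model.PutEventSelectorsRequest;

public class EnableS3DataEvents {
    public static void main(String[] args) {
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        // Log object-level (data) events for every object in the bucket.
        // The trailing "/" means "all objects under this bucket".
        DataResource s3Objects = new DataResource()
                .withType("AWS::S3::Object")
                .withValues("arn:aws:s3:::my-upload-bucket/"); // hypothetical bucket

        EventSelector selector = new EventSelector()
                .withReadWriteType("All")
                .withIncludeManagementEvents(true)
                .withDataResources(s3Objects);

        cloudTrail.putEventSelectors(new PutEventSelectorsRequest()
                .withTrailName("my-trail")                     // hypothetical trail
                .withEventSelectors(selector));
    }
}
```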
From the AWS Management Console, I created an S3 bucket and a CloudWatch Events rule, and set them up to trigger an AWS Lambda function from the browser window itself.
The triggers worked before 14-Jul-21, but they have since become ineffective.
Please review the screenshot of the CloudWatch event output below. The invocation is failing. I believe the root cause is that the function relies on implicit permissions only.
Screenshots from the AWS documentation about permissions are below.
Please suggest a next course of action to reach a resolution.
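If the root cause really is a missing resource-based permission, I assume something like the following would grant CloudWatch Events the right to invoke the function; a minimal sketch with the AWS SDK for Java (the function name and rule ARN are hypothetical):

```java
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.AddPermissionRequest;

public class GrantInvokePermission {
    public static void main(String[] args) {
        AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

        // Allow the CloudWatch Events rule (hypothetical ARN) to invoke the function.
        lambda.addPermission(new AddPermissionRequest()
                .withFunctionName("my-function")            // hypothetical function name
                .withStatementId("allow-cloudwatch-events") // any unique statement id
                .withAction("lambda:InvokeFunction")
                .withPrincipal("events.amazonaws.com")
                .withSourceArn("arn:aws:events:us-east-1:123456789012:rule/my-rule"));
    }
}
```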
I have a use case where I want to put data into an S3 bucket, for it to be read later by another account. I only want the other account to be able to read the file in S3, and once they have read it, I will then delete the file myself.
I have been reading the S3 documentation and cannot see that it covers this use case: sending a notification when a file in an S3 bucket is read.
Can anyone help, or suggest an alternative workflow? I have been looking at AWS SNS and was wondering whether that would be a better solution.
You could use CloudTrail and CloudWatch Events to enable this workflow.
By default, S3 object-level API calls are not logged, so you'd want to enable that by following the instructions here.
Then create a CloudWatch Events rule for the Simple Storage Service that matches the "GetObject" operation.
Have this event invoke a Lambda function that removes the object (a sketch of such a function follows below).
More information available here.
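A minimal sketch of such a handler in Java, assuming the rule uses the "AWS API Call via CloudTrail" detail type (source aws.s3, eventName GetObject), so the bucket and key arrive under detail.requestParameters:

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

import java.util.Map;

// Deletes the S3 object referenced by a CloudTrail "GetObject" event
// delivered through a CloudWatch Events rule.
public class DeleteAfterReadHandler implements RequestHandler<Map<String, Object>, String> {

    private final AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

    @Override
    @SuppressWarnings("unchecked")
    public String handleRequest(Map<String, Object> event, Context context) {
        // CloudTrail places the API call parameters under detail.requestParameters.
        Map<String, Object> detail = (Map<String, Object>) event.get("detail");
        Map<String, Object> params = (Map<String, Object>) detail.get("requestParameters");
        String bucket = (String) params.get("bucketName");
        String key = (String) params.get("key");

        s3.deleteObject(bucket, key);
        return "Deleted s3://" + bucket + "/" + key;
    }
}
```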
I'm trying to generate a CloudFormation stack provided by AWS here. When I click the Create a Cognito User with CloudFormation button, it directs me to the AWS console's CloudFormation page in us-west-2 (Oregon), and from there it's pretty much self-explanatory. The problem is that the company I'm working for only allows work in us-west-1 (N. California). I have looked over the CloudFormation template itself and can't find any region being mentioned. I have also asked this question in the AWS developer forum, but no one has responded. Does anyone here know how to generate that particular stack in a region other than us-west-2 (Oregon)? Thanks!
I found a workaround for that. I faced the same problem: my company policy was set to not use us-west-2, so I couldn't use the CloudFormation JSON script provided by Amazon Kinesis Data Generator.
What I did was:
1. Download the CloudFormation JSON script for the Amazon Kinesis Data Generator to your local machine. The download link can be found on the Amazon Kinesis Data Generator Help page.
2. Download the source code. The source code download link can also be found on the Amazon Kinesis Data Generator Help page.
3. In your AWS account, go to S3 and create an S3 bucket in a region you are allowed to use. Name it whatever you want.
4. Upload the source code downloaded in step 2 to the bucket created in step 3.
5. Edit the CloudFormation JSON script downloaded in step 1: inside the script, change the bucket name in the Lambda function definition to the name of the bucket you created in step 3 (see the snippet after these steps).
6. Go to CloudFormation and create the stack by uploading your edited script.
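For illustration, the step 5 edit typically targets the Lambda resource's Code block in the template; a rough sketch (the resource name and key here are placeholders, not the exact identifiers in the KDG script):

```json
"KinesisDataGeneratorFunction": {
  "Type": "AWS::Lambda::Function",
  "Properties": {
    "Code": {
      "S3Bucket": "my-copied-kdg-bucket",
      "S3Key": "datagen-cognito-setup.zip"
    }
  }
}
```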
One thing to keep in mind when implementing this workaround: if AWSLabs makes any changes to the source code, or a newer version is released, you will have to check for it manually and upload the update to your bucket.
I hope this is clear.
I have created a JMeter plugin to publish data records to a Kinesis Data Stream.
https://github.com/JoseLuisSR/awsmeter
It works very well, and you don't need to use any additional AWS services to publish events to Kinesis, unlike Kinesis Data Generator, where you could pay additional charges for the services (Cognito, CloudFormation, Lambda) needed to build and deploy KDG.
You just need an AWS IAM user with programmatic access; then download JMeter and install the awsmeter plugin.
If you have questions or comments, let me know.
I am new to AWS CloudTrail. I have gone through a number of AWS docs and am unable to figure out how to read the last 7 days of CloudTrail logs programmatically, without configuring a trail and without getting charged.
I want to write a Java program that reads audit logs from AWS and processes them. I know we can create a trail and read its logs from an S3 bucket programmatically, but I don't know how to read the last 7 days of logs using the AWS SDK API, the way we see them in the AWS console (the last 7 days of audit logs can be read there free of charge).
This could be done with the cloudtrail-processing-library, but the properties/configuration file for this library requires an SQS URL as an argument, which I don't have.
Please assist me so that I can write the Java program.
You can use the lookupEvents API in CloudTrail to get the list of events (any create/update/delete operations).
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/cloudtrail/AWSCloudTrail.html#lookupEvents-com.amazonaws.services.cloudtrail.model.LookupEventsRequest
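A minimal sketch with the AWS SDK for Java v1; the seven-day window and default region/credentials are assumptions for illustration:

```java
import com.amazonaws.services.cloudtrail.AWSCloudTrail;
import com.amazonaws.services.cloudtrail.AWSCloudTrailClientBuilder;
import com.amazonaws.services.cloudtrail.model.Event;
import com.amazonaws.services.cloudtrail.model.LookupEventsRequest;
import com.amazonaws.services.cloudtrail.model.LookupEventsResult;

import java.util.Date;

public class LookupLastSevenDays {
    public static void main(String[] args) {
        AWSCloudTrail cloudTrail = AWSCloudTrailClientBuilder.defaultClient();

        Date now = new Date();
        Date sevenDaysAgo = new Date(now.getTime() - 7L * 24 * 60 * 60 * 1000);

        LookupEventsRequest request = new LookupEventsRequest()
                .withStartTime(sevenDaysAgo)
                .withEndTime(now);

        // Page through results until the next token is exhausted.
        String nextToken = null;
        do {
            LookupEventsResult result = cloudTrail.lookupEvents(request.withNextToken(nextToken));
            for (Event event : result.getEvents()) {
                System.out.printf("%s %s by %s%n",
                        event.getEventTime(), event.getEventName(), event.getUsername());
            }
            nextToken = result.getNextToken();
        } while (nextToken != null);
    }
}
```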
The logs are stored in an S3 bucket, and you can use AWS Athena to process and query them if you want, so you don't have to write a Java program. If you do, the program will need IAM privileges to read from the S3 bucket that stores the logs (a sketch follows the links below).
AWS Athena
How to find your Cloud trail logs
Java code examples on S3 bucket objects
Java Cloudtrail SDK reference
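If you do go the Java route, a minimal sketch of reading the gzipped CloudTrail log objects from the bucket; the bucket name and account id are hypothetical, and pagination of the listing is omitted for brevity:

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsV2Request;
import com.amazonaws.services.s3.model.ListObjectsV2Result;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public class ReadTrailLogs {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Hypothetical bucket/prefix; CloudTrail delivers logs under
        // AWSLogs/<account-id>/CloudTrail/<region>/<yyyy>/<mm>/<dd>/.
        ListObjectsV2Request listRequest = new ListObjectsV2Request()
                .withBucketName("my-trail-bucket")
                .withPrefix("AWSLogs/123456789012/CloudTrail/");

        ListObjectsV2Result listing = s3.listObjectsV2(listRequest);
        for (S3ObjectSummary summary : listing.getObjectSummaries()) {
            try (S3Object object = s3.getObject(summary.getBucketName(), summary.getKey());
                 BufferedReader reader = new BufferedReader(new InputStreamReader(
                         new GZIPInputStream(object.getObjectContent())))) {
                // Each log file is a single JSON document with a "Records" array.
                reader.lines().forEach(System.out::println);
            }
        }
    }
}
```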
I am not able to export CloudWatch Logs to an S3 bucket through the AWS console, as it shows the following error message. Can anyone please help me?
"One or more of the specified parameters are invalid e.g. Time Range etc"
You are probably using an S3 bucket with encryption. This error is shown when the export task to S3 fails because CloudWatch Logs export tasks don't yet support server-side encryption.
(I reproduced this).
In my case, it was incorrect access permissions configured in the bucket policy. It works with AES-256 encryption enabled in my test run.
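For reference, a minimal sketch of creating the same export task with the AWS SDK for Java; the log group, bucket name, and prefix are hypothetical. Note that from and to are epoch milliseconds, so passing seconds is one plausible way to trigger the "Time Range" complaint:

```java
import com.amazonaws.services.logs.AWSLogs;
import com.amazonaws.services.logs.AWSLogsClientBuilder;
import com.amazonaws.services.logs.model.CreateExportTaskRequest;
import com.amazonaws.services.logs.model.CreateExportTaskResult;

public class ExportLogsToS3 {
    public static void main(String[] args) {
        AWSLogs logs = AWSLogsClientBuilder.defaultClient();

        long to = System.currentTimeMillis();
        long from = to - 24L * 60 * 60 * 1000; // last 24 hours

        // Log group and destination bucket are hypothetical.
        CreateExportTaskRequest request = new CreateExportTaskRequest()
                .withTaskName("export-last-day")
                .withLogGroupName("/aws/lambda/my-function")
                .withFrom(from)        // epoch milliseconds, not seconds
                .withTo(to)
                .withDestination("my-log-export-bucket")
                .withDestinationPrefix("cloudwatch-exports");

        CreateExportTaskResult result = logs.createExportTask(request);
        System.out.println("Task id: " + result.getTaskId());
    }
}
```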