I am looking for a way to trigger a Jenkins job whenever an S3 bucket is updated with a file of a particular format.
I have tried the Lambda function method with an "Add trigger -> S3 bucket PUT" event. I have followed this article, but it's not working. I have also found that AWS SNS and AWS SQS can be used for this, but some people say that approach is outdated. So what is the simplest way to trigger a Jenkins job when the S3 bucket is updated?
I just want a trigger whenever the zip file is uploaded by job A in Jenkins 1 to the S3 bucket named 'testbucket' in AWS environment 2. The two Jenkins instances are in different AWS accounts, each in a separate private VPC. I have attached my Jenkins workflow as a picture; please refer to the picture below.
The approach you are using seems solid and a good way to go. I'm not sure what specific issue you are having, so I'll list a couple of things that could explain why this might not be working for you:
Permissions issue - Check that the Lambda function can be invoked by the S3 service. If you are doing this in the console (manually) then you probably don't have to worry about it, since the permission should be set up automatically. If you're doing this through infrastructure as code, it's something you need to add.
Lambda VPC config - Your Lambda will need to run in your VPC, in a subnet that can reach the Jenkins instance. By default a Lambda function is not associated with a VPC and will not have access to the Jenkins instance (unless Jenkins is publicly available over the internet).
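If the wiring itself is unclear, here is a minimal sketch of such a handler (assuming Python and the standard Jenkins remote-trigger endpoint; the JENKINS_URL, JOB_NAME, user, and API-token environment variables are placeholders, not taken from your setup):

import base64
import os
import urllib.parse
import urllib.request

# Placeholder configuration -- set these as Lambda environment variables.
JENKINS_URL = os.environ["JENKINS_URL"]          # e.g. http://jenkins.internal:8080
JOB_NAME = os.environ["JOB_NAME"]                # the job to trigger
JENKINS_USER = os.environ["JENKINS_USER"]
JENKINS_API_TOKEN = os.environ["JENKINS_API_TOKEN"]

def handler(event, context):
    for record in event.get("Records", []):
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        if not key.endswith(".zip"):
            continue  # only react to the file format we care about

        # Jenkins "build with parameters" remote-trigger endpoint.
        url = (f"{JENKINS_URL}/job/{JOB_NAME}/buildWithParameters"
               f"?S3_KEY={urllib.parse.quote(key)}")
        request = urllib.request.Request(url, method="POST")

        # Basic auth with a Jenkins user and API token.
        credentials = base64.b64encode(
            f"{JENKINS_USER}:{JENKINS_API_TOKEN}".encode()).decode()
        request.add_header("Authorization", f"Basic {credentials}")

        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"Triggered {JOB_NAME} for {key}: HTTP {response.status}")

Note that this function is exactly where the two points above bite: S3 needs permission to invoke it, and because Jenkins sits in a private VPC, the function must be attached to subnets that can reach that Jenkins instance.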
I found this other Stack Overflow post that describes the SNS/SQS setup if you want to continue down that path: Trigger Jenkins job when a S3 file is updated
I am fairly new to CloudFormation templating, but all I am looking to do is create a template where I create an S3 bucket and import contents into that bucket from another S3 bucket in a different account (which is also mine). I realize CloudFormation does not natively support importing contents into an S3 bucket, and I have to use a custom resource. I could not find any references/resources that do such a task. Hoping someone could point out some examples or maybe even some guidance as to how to tackle this.
Thank you very much!
Can't provide full code, but can provide some guidance. There are a few ways of doing this, but I will list one:
Create a bucket policy for the bucket in the second account. The policy should allow the first account (the one with the CloudFormation stack) to read it. There are many resources on doing this. One from AWS is here.
Create a standalone Lambda function in the first account with an execution role allowing it to read the bucket in the second account. This is not a custom resource yet. The purpose of this Lambda function is to test the cross-account permissions and your code which reads objects from the bucket. It is a test function for sorting out all the permissions and polishing the code that copies objects from one bucket to the other (a sketch of that copy logic is below).
Once your Lambda function works as intended, modify it (or create a new one) as a custom resource in CFN. As a custom resource, it will need to take your newly created bucket in CFN as one of its arguments. For easier creation of custom resources, this AWS helper can be used.
Note that the maximum Lambda execution timeout is 15 minutes. Depending on how many objects you have, that may not be enough.
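To make the copy step concrete, here is a minimal sketch of the copy logic (assuming Python/boto3; the bucket names are placeholders, and the custom-resource response handling, e.g. via the helper mentioned above or cfnresponse, is omitted):

import boto3

s3 = boto3.client("s3")

def copy_bucket_contents(source_bucket, destination_bucket):
    # Copy every object from the source bucket (second account)
    # into the destination bucket created by the CloudFormation stack.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            # The execution role needs s3:GetObject on the source
            # and s3:PutObject on the destination.
            s3.copy({"Bucket": source_bucket, "Key": key}, destination_bucket, key)
            print(f"Copied {key}")

# Example usage with placeholder bucket names:
# copy_bucket_contents("second-account-bucket", "newly-created-bucket")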
Hope this helps.
If Custom Resources scare you, then a simpler way is to launch an Amazon EC2 instance with a startup script specified via User Data.
The CloudFormation template can 'insert' the name of the new bucket into the script by referencing the bucket resource that was created. The script could then run an AWS CLI command to copy the files across.
Plus, it's not expensive. A t3.micro instance is about 1c/hour and it is charged per second, so it's pretty darn close to free.
In my company we store log files in CloudWatch, and after 7 days they get sent to S3. However, I am having trouble finding exactly where the log files are stored in S3.
Since the process of moving logs from CloudWatch to S3 is automated, I followed https://medium.com/tensult/exporting-of-aws-cloudwatch-logs-to-s3-using-automation-2627b1d2ee37 in the hope of finding the path.
We are not using Step Functions, so I checked the Lambda service, but there was no function that moves log files from CloudWatch to S3.
I have tried looking at the CloudWatch rules, hoping to find something like:
{
"region":"REGION",
"logGroupFilter":"prod",
"s3BucketName":"BUCKET_NAME",
"logFolderName":"backend"
}
so I can find which bucket the log files are going to and into which folder.
How can I find where my logs are stored? If moving the data is automated, why are there no functions visible?
Additional note: I am new to AWS; if there is a good resource on AWS architecture, please recommend it.
Thanks in advance!
If the rule exists or was created properly, then you should see it in the AWS console, and the same is true for the S3 bucket.
One common problem with the visibility of a resource in the AWS console is having the wrong region selected. Verify in which region the rule and the S3 bucket were created (if they were ever created); selecting the right region in the top-right corner should show the resources in that region.
Hope it helps!
Have you tried using the "View all exports to Amazon S3" item in the CloudWatch -> Logs console? It is one of the items in the Actions menu.
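If you prefer to check from code rather than the console, and assuming the automation uses CloudWatch Logs export tasks, a small script like this sketch (Python/boto3) lists export tasks in every region along with their destination bucket and prefix:

import boto3

session = boto3.session.Session()

# Export tasks are regional, so check every region where CloudWatch Logs is available.
for region in session.get_available_regions("logs"):
    logs = session.client("logs", region_name=region)
    try:
        tasks = logs.describe_export_tasks().get("exportTasks", [])
    except Exception:
        continue  # e.g. region not enabled for this account
    for task in tasks:
        print(f"{region}: log group {task.get('logGroupName')} -> "
              f"s3://{task.get('destination')}/{task.get('destinationPrefix', '')} "
              f"({task['status']['code']})")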
Objective:
Whenever an object is stored in the bucket, trigger a batch job (aws batch) and pass the uploaded file url as an environment variable
Situation:
I currently have everything set up. I've got the s3 bucket with cloudwatch triggering batch jobs, but I am unable to get the full file url or to set environment variables.
I have followed the following tutorial: https://docs.aws.amazon.com/batch/latest/userguide/batch-cwe-target.html "To create an AWS Batch target that uses the input transformer".
The job is created and processed in AWS Batch, and under the job details, I can see the parameters received are:
S3bucket: mybucket
S3key: view-0001/custom/2019-08-07T09:40:04.989384.json
But the environment variables have not changed, and the file URL does not contain all the other parameters such as access and expiration tokens.
I have also not found any information about what other variables can be used in the input transformer. If anyone has a link to a manual, it would be welcome.
Also, in the AWS CLI documentation, it is possible to set the environment variables when submitting a job, so I guess it should be possible here as well? https://docs.aws.amazon.com/cli/latest/reference/batch/submit-job.html
So the question is, how to submit a job with the file url as an environment variable?
You could accomplish this by triggering a Lambda function off the bucket, generating a pre-signed URL in that function, and then starting the Batch job from it.
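Here is a minimal sketch of what that Lambda handler could look like (assuming Python/boto3; the job queue and job definition names are placeholders for your own):

import urllib.parse
import boto3

s3 = boto3.client("s3")
batch = boto3.client("batch")

# Placeholders -- replace with your own Batch job queue and job definition.
JOB_QUEUE = "my-job-queue"
JOB_DEFINITION = "my-job-definition"

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Pre-signed URL, which carries the access and expiration parameters.
        file_url = s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": bucket, "Key": key},
            ExpiresIn=3600,
        )

        # Pass the URL (and the raw bucket/key) as environment variables.
        batch.submit_job(
            jobName="process-s3-object",  # placeholder job name
            jobQueue=JOB_QUEUE,
            jobDefinition=JOB_DEFINITION,
            containerOverrides={
                "environment": [
                    {"name": "FILE_URL", "value": file_url},
                    {"name": "S3_BUCKET", "value": bucket},
                    {"name": "S3_KEY", "value": key},
                ]
            },
        )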
However, a better approach would be to simply access the file within the Batch job using the bucket and key. You could use the AWS SDK for your language or simply use the awscli. For example, you could download the file:
aws s3 cp s3://$BUCKET/$KEY /tmp/file.json
On the other hand, if you need a pre-signed URL outside of the Batch job, you could generate one with the AWS SDK or the awscli:
aws s3 presign s3://$BUCKET/$KEY
With either of these approaches to accessing the file within the Batch job, you will need to configure the instance role of your Batch compute environment with IAM access to your S3 bucket.
I have an AWS Lambda function which works fully when tested locally using ATOM; within it, the function reads from and writes to my S3 bucket. However, when I upload the function to Lambda it doesn't seem to have access to S3. Whenever I try to read from S3 it simply times out after 3 minutes, even on simple requests like listing buckets.
I have increased the access of "Lambda Basic Execution" to have full admin access, and it still doesn't work.
Any ideas would be greatly appreciated.
From the information provided, you are having trouble with communication between the deployed Lambda function and S3. When running on your local machine you have direct access to S3, so there is no problem, because it is not an isolated environment like a VPC.
Please check your network configuration (especially if the Lambda runs in a VPC; for test purposes you can disable that in the AWS console -> Lambda -> yourLambdaFunction -> Network -> choose "No VPC"). The image below shows the Lambda config for those tests:
Lambda console no VPC config
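If you would rather confirm the network issue from inside the function than through the console, a minimal handler like this sketch (assuming Python/boto3) fails within seconds instead of hanging for the full Lambda timeout:

import boto3
from botocore.config import Config

# Short timeouts so a missing network path to S3 fails fast
# instead of hanging until the Lambda timeout.
s3 = boto3.client(
    "s3",
    config=Config(connect_timeout=5, read_timeout=5, retries={"max_attempts": 1}),
)

def handler(event, context):
    # A connect timeout here usually means the function has no network route
    # to S3 (typical for a VPC-attached Lambda without a NAT gateway or S3 endpoint).
    buckets = s3.list_buckets()["Buckets"]
    return [bucket["Name"] for bucket in buckets]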
This has been asked before; the answers in the post below should solve your problem:
Lambda Timeout while communicating with S3
Please let us know if it helps.
I'd like to run some code using Lambda on the event that I create a new EC2 instance. Looking at the blueprint config-rule-change-triggered, I have the ability to run code depending on various configuration changes, but not when an instance is created. Is there a way to do what I want? Or have I misunderstood the use case of Lambda?
We had a similar requirement a couple of days back (users were supposed to get emails whenever a new instance was launched).
1) Go to CloudWatch, then select Rules.
2) Select the service name (it's EC2 in your case), then select "EC2 Instance State-change Notification".
3) Then select "pending" in the "Specific state" dropdown.
4) Click on the "Add target" option and select your Lambda function.
That's it; whenever a new instance gets launched, CloudWatch will trigger your Lambda function. (A sketch of the same setup via the API is included below.)
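If you prefer to script this instead of clicking through the console, here is a minimal sketch of the same setup (assuming Python/boto3; the function name and ARN are placeholders):

import json
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholders -- replace with your Lambda function's ARN and name.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:123456789012:function:notify-on-launch"
FUNCTION_NAME = "notify-on-launch"

# Rule that fires when any EC2 instance enters the 'pending' state.
rule = events.put_rule(
    Name="ec2-instance-launched",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EC2 Instance State-change Notification"],
        "detail": {"state": ["pending"]},
    }),
    State="ENABLED",
)

# Point the rule at the Lambda function.
events.put_targets(
    Rule="ec2-instance-launched",
    Targets=[{"Id": "notify-on-launch", "Arn": FUNCTION_ARN}],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName=FUNCTION_NAME,
    StatementId="ec2-instance-launched-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)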
Hope it helps !!
You could do this by inserting code into your EC2 instance launch user data and having that code explicitly invoke a Lambda function, but that's not the best way to do it.
A better way is to use a combination of CloudTrail and Lambda. If you enable CloudTrail logging (every account should have this enabled, all the time, in all regions) then CloudTrail will log to S3 all of the API calls made in your account. You then connect this to Lambda by configuring S3 to publish events to Lambda. Your Lambda function will receive an S3 event, can then retrieve the API logs, find RunInstances API calls, and do whatever work you need to do as a consequence of the new instance being launched.
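To make that concrete, such a handler might look roughly like this sketch (assuming Python/boto3; CloudTrail delivers gzipped JSON objects with a top-level Records array, and the reaction to the new instance is left as a placeholder):

import gzip
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # CloudTrail log files are gzipped JSON documents.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        trail = json.loads(gzip.decompress(body))

        for api_call in trail.get("Records", []):
            if api_call.get("eventName") != "RunInstances":
                continue
            response = api_call.get("responseElements") or {}
            for instance in response.get("instancesSet", {}).get("items", []):
                # Placeholder: react to the newly launched instance here.
                print("New instance launched:", instance.get("instanceId"))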
Some helpful references here and here.
I don't see a notification trigger for instance startup; however, what you can do is write a startup script and pass it in via user data. That startup script would need to download and install the AWS CLI and then publish a message to a pre-configured SNS topic. The script would authenticate to SNS (and whatever other AWS services are needed) via your IAM role, so you would need to give the IAM role permission to do whatever you want the script to do. This can be done in the IAM console.
Your Lambda function would then be subscribed to that topic and would execute when a message is published. This is similar to the article below (though the author is doing something similar for shutdown, not startup).
http://rogueleaderr.com/post/48795010760/how-to-notifyemail-yourself-when-an-ec2-instance
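If the startup script runs somewhere Python and boto3 are available, the publish step could be as small as this sketch (the topic ARN is a placeholder, and the instance's IAM role needs sns:Publish on that topic):

import urllib.request
import boto3

# Placeholder -- replace with the ARN of your pre-configured SNS topic.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:instance-launched"

# The instance metadata service exposes the instance ID locally.
# (IMDSv1 shown for brevity; IMDSv2 requires a session token.)
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

sns = boto3.client("sns")
sns.publish(
    TopicArn=TOPIC_ARN,
    Subject="EC2 instance launched",
    Message=f"Instance {instance_id} has started.",
)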
If you are putting the EC2 instances into an autoscale group, I believe there is a trigger that gets fired when the autoscale group launches a new instance, so you could take advantage of that.
I hope that helps.
If you have CloudTrail enabled, then you can have an S3 PutObject event on the trail bucket trigger a Lambda function. The Lambda function parses the object that is passed to it and, if it finds a RunInstances event, runs your code.
I do the exact same thing to notify certain users when a new instance is launched. With Lambda/Python, it is ~20 lines of code.