How to access AWS CodeBuild reports in a Lambda?

At the moment I have an EventBridge rule sending CodeBuild build phase updates with status "FAILED" to a Lambda. Specifically: unit tests are run and then a report is created that contains information about all the tests that were run. The event my Lambda receives from CodeBuild contains the ARN of the report, and I would like the Lambda to read that ARN, access the report, and output what went wrong.
I can't seem to find a way to access the CodeBuild report within a Lambda - AWS CDK API reference doesn't seem to have anything for that within the CodeBuild sections. I have the ARN for the generated report, I just don't know how to make my Lambda read it.

https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/codebuild.html#CodeBuild.Client.batch_get_reports was exactly what I was looking for. Searched around and couldn't see it, and here it is! describe_test_cases() is exactly what I needed for this.
Thanks for the link #jingx
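For anyone who lands here later, here is a minimal boto3 sketch of such a handler. Where exactly the report ARNs sit in the EventBridge event is an assumption, so adjust that lookup to match your rule:

import boto3

codebuild = boto3.client("codebuild")

def handler(event, context):
    # Assumption: the EventBridge event exposes the report ARNs here;
    # adjust this lookup to match the event your rule actually delivers.
    report_arns = event["detail"]["additional-information"].get("reportArns", [])

    for report in codebuild.batch_get_reports(reportArns=report_arns)["reports"]:
        summary = report.get("testSummary", {})
        print(f"Report {report['arn']}: {summary.get('statusCounts')}")

        # Page through the failed test cases and print their messages.
        next_token = None
        while True:
            kwargs = {"reportArn": report["arn"], "filter": {"status": "FAILED"}}
            if next_token:
                kwargs["nextToken"] = next_token
            page = codebuild.describe_test_cases(**kwargs)
            for case in page["testCases"]:
                print(f"FAILED: {case['name']}: {case.get('message')}")
            next_token = page.get("nextToken")
            if not next_token:
                break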

Related

Jenkins job trigger when s3 bucket updated

I am looking for a way to trigger a Jenkins job whenever the S3 bucket is updated with a particular file format.
I have tried a Lambda function with an "Add trigger -> S3 bucket PUT method". I have followed this article, but it's not working. I have explored and found out that AWS SNS and AWS SQS can also be used for this, but the problem is that some say this approach is outdated. So which is the simplest way to trigger the Jenkins job when the S3 bucket is updated?
I just want a trigger whenever the zip file is uploaded from job A in Jenkins 1 to the S3 bucket called 'testbucket' in AWS environment 2. Both Jenkins instances are in different AWS accounts under separate private VPCs.
The approach you are using seems solid and a good way to go. I'm not sure what specific issue you are having so I'll list a couple things that could explain why this might not be working for you:
Permissions issue - check to ensure that the Lambda can be invoked by the S3 service. If you are doing this in the console (manually), then you probably don't have to worry about that, since the permissions should be set up automatically. If you're doing this through infrastructure as code, then it's something you need to add; see the snippet after these two points.
Lambda VPC config - your Lambda will need to run in a subnet of the same VPC that the Jenkins instance runs in. By default, a Lambda is not associated with a VPC and will not have access to the Jenkins instance (unless it's publicly available over the internet).
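On the permissions point, here is a one-off boto3 sketch of granting S3 permission to invoke the function, in case you're wiring this up outside the console. The function name, bucket, and account id below are placeholders:

import boto3

lambda_client = boto3.client("lambda")

# Placeholders: use your own function name, bucket, and bucket-owner account id.
lambda_client.add_permission(
    FunctionName="jenkins-trigger-function",
    StatementId="AllowS3Invoke",
    Action="lambda:InvokeFunction",
    Principal="s3.amazonaws.com",
    SourceArn="arn:aws:s3:::testbucket",
    SourceAccount="111111111111",
)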
I found this other Stack Overflow post that describes the SNS/SQS setup, if you want to continue down that path: Trigger Jenkins job when a S3 file is updated

Recreating/reattaching AWS Lambda console logging to CloudWatch

Thinking that I wanted to clear out old logs, I made the mistake of deleting my Lambda's "Log Stream" on CloudWatch.
The result, as I should have expected if I was awake, is that now CloudWatch isn't getting the Lambda's console logs at all. Oops.
The log group still exists.
I can see how to create a new log stream.
What I haven't been able to find on the web are clear instructions for getting the existing Lambda to output to this new stream... i.e., to repair what I did.
Can someone provide instructions or a pointer to them, please? I'm sure I'm not the only one who's made this mistake, so I think it's an answer worth having on tap.
UPDATE: Decided to try recovering by creating an entirely new Lambda, running the same code and configured the same way, expecting that it would Just Work; my understanding was that a new Lambda binds to a CloudWatch group automagically.
Then I ran my test, clicked the twist-arrow to see the end of the output, and hit "Click here to view the corresponding CloudWatch log group." It opened CloudWatch looking at the expected log group name -- with a big red warning that this group did not exist. Clicking "(Logs)" at the top of the test output gave the same behavior.
I tried creating the group manually, but now I'm back where I was -- lambda runs, I get local log output, but the logs are not reaching CloudWatch.
So it looks like there's something deeper wrong. CloudWatch is still getting logs from the critical lambda (the one driving my newly-released Alexa skill), and the less-critical one (scheduled update for the skill's database) is running OK so I don't absolutely need its logs right now -- but I need to figure this out so I can read them if that background task ever breaks.
Since this is now looking like real Unexpected Behavior rather than user error, I'll take it to the AWS forums and post here if they come up with an answer. On that system, the question is now at https://repost.aws/questions/QUDzF2c_m0TPCwl3Ufa527Wg/lambda-logging-to-cloud-watch-seems-to-be-broken
Programmer's mantra: "If it was easy, they wouldn't need us..."
After a Lambda function is executed, you can go to the Monitoring tab and click View logs in CloudWatch -- it will take you to the location where the logs should be present.
If you know that the function has executed but no logs are appearing, then confirm that the IAM role used by the Lambda function has the AWSLambdaBasicExecutionRole managed policy attached. This grants the Lambda function permission to write to CloudWatch Logs.
See: AWS Lambda execution role - AWS Lambda
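If the policy turns out to be missing, attaching it is a one-liner; here is a sketch with boto3, where the role name is a placeholder for your function's execution role:

import boto3

iam = boto3.client("iam")

# Placeholder role name: use the execution role attached to your Lambda function.
iam.attach_role_policy(
    RoleName="my-lambda-execution-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)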

What is the Terraform resource for this AWS console item?

I am looking to add notifications to a build pipeline I am deploying in AWS via Terraform. I cannot seem to locate the resource which creates the status notifications in CodeBuild. Can someone let me know which resource this is?
You haven't mentioned what sort of notification you are looking to create, so I can't provide sample code; however, as per the AWS docs here, you can detect state changes in CodePipeline using CloudWatch Events.
You can find the Terraform reference for CloudWatch Event Rules here, and you can follow the docs to create a resource that monitors CodePipeline for state changes using CloudWatch Events Rules.

Serverless-ly Query External REST API from AWS and Store Results in S3?

Given a REST API, outside of my AWS environment, which can be queried for json data:
https://someExternalApi.com/?date=20190814
How can I setup a serverless job in AWS to hit the external endpoint on a periodic basis and store the results in S3?
I know that I can instantiate an EC2 instance and just setup a cron. But I am looking for a serverless solution, which seems to be more idiomatic.
Thank you in advance for your consideration and response.
Yes, you absolutely can do this, and probably in several different ways!
The pieces I would use would be:
CloudWatch Event using a cron-like schedule, which then triggers...
A Lambda function (with the right IAM permissions) that calls the API using, e.g., Python requests or an equivalent HTTP library, and then uses the AWS SDK to write the results to an S3 bucket of your choice:
An S3 bucket ready to receive!
This should be all you need to achieve what you want.
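If it helps, here is a minimal Python sketch of the Lambda piece under those assumptions. The bucket name and object key are placeholders; the endpoint is the one from your question:

import urllib.request
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Placeholders: substitute your own bucket; the endpoint is taken from the question.
API_URL = "https://someExternalApi.com/?date={date}"
BUCKET = "my-results-bucket"

def handler(event, context):
    date = datetime.now(timezone.utc).strftime("%Y%m%d")

    # Call the external API for today's data.
    with urllib.request.urlopen(API_URL.format(date=date)) as resp:
        payload = resp.read()

    # Write the raw JSON response to S3.
    key = f"results/{date}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload, ContentType="application/json")
    return {"bucket": BUCKET, "key": key}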
I'm going to skip the implementation details, as they are largely outside the scope of your question. As such, I'm going to assume your function is already written and targets Node.js.
AWS can do this on its own, but to make it simpler, I'd recommend using the Serverless Framework; the rest of this answer assumes you're using it.
Assuming you're entirely new to serverless, the first thing you'll need to do is to create a handler:
serverless create --template "aws-nodejs" --path my-service
This creates a service based on the aws-nodejs template on the provided path. In there, you will find serverless.yml (the configuration for your function) and handler.js (the code itself).
Assuming your function is exported as crawlSomeExternalApi on the handler export (module.exports.crawlSomeExternalApi = () => {...}), the functions entry in your serverless file would look like this if you wanted to invoke it every 3 hours:
functions:
  crawl:
    handler: handler.crawlSomeExternalApi
    events:
      - schedule: rate(3 hours)
That's it! All you need now is to deploy it through serverless deploy -v
Under the hood, what this does is create a CloudWatch Events schedule rule for your function. An example of it can be found in the documentation.
The first thing you need is a Lambda function. Implement your logic (hitting the API and writing the data to S3, or whatever) inside the Lambda function. Next, you need a schedule to periodically trigger your Lambda function. A schedule expression can be used to trigger an event periodically, using either a cron expression or a rate expression. The Lambda function you created earlier should be configured as the target of this CloudWatch rule.
The resulting flow will be: CloudWatch invokes the Lambda function whenever the rule fires (depending on your schedule), and the Lambda then performs your logic.
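As a concrete (but hedged) sketch of that wiring with boto3, where the rule name, statement id, and function ARN are placeholders:

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# Placeholder ARN: substitute your own function.
FUNCTION_ARN = "arn:aws:lambda:us-east-1:111111111111:function:fetch-external-api"

# Rule that fires every 3 hours (a cron() expression works here too).
rule_arn = events.put_rule(
    Name="fetch-external-api-schedule",
    ScheduleExpression="rate(3 hours)",
)["RuleArn"]

# Allow CloudWatch Events to invoke the function...
lambda_client.add_permission(
    FunctionName=FUNCTION_ARN,
    StatementId="AllowEventsInvoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)

# ...and point the rule at the function.
events.put_targets(
    Rule="fetch-external-api-schedule",
    Targets=[{"Id": "fetch-lambda", "Arn": FUNCTION_ARN}],
)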

How do I add a Lambda Function with an S3 Trigger in CloudFormation?

I've been working with CloudFormation YAML for a while and have found it to be comprehensive - until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen so far seem to require that you create the bucket in the same CloudFormation script as the Lambda function. This doesn't work for me, because we have a design goal of being able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by including the account name/region in the bucket name, but that's just not desirable from a bucket sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team. This is something which the underlying CloudFormation service can't do.
There is a possible workaround: implement a Custom Resource that triggers a separate Lambda function to modify the existing bucket and link it to the Lambda function you want to deploy.
As "implement a Custom Resource" isn't very specific: here is an AWS GitHub repo with scaffold code to help write it, and then you declare something like the following in your template (where LambdaToBucket is the custom function you wrote). I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (telling S3 to notify Lambda about changes), the other is a Lambda permission on the function (allowing invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
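To make that custom function a little more concrete, here is a hedged Python sketch of what LambdaToBucket might do. The BucketName/FunctionArn property names are invented for illustration, Delete handling is omitted, and note that put_bucket_notification_configuration replaces the bucket's entire notification configuration:

import boto3
import cfnresponse  # available to inline CloudFormation Lambdas; bundle it otherwise

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

def handler(event, context):
    props = event["ResourceProperties"]
    bucket = props["BucketName"]         # hypothetical property names,
    function_arn = props["FunctionArn"]  # passed in from the template
    try:
        if event["RequestType"] in ("Create", "Update"):
            # Lambda permission: allow S3 to invoke the target function.
            lambda_client.add_permission(
                FunctionName=function_arn,
                StatementId="AllowS3Invoke",
                Action="lambda:InvokeFunction",
                Principal="s3.amazonaws.com",
                SourceArn=f"arn:aws:s3:::{bucket}",
            )
            # Bucket notification: tell the existing bucket to notify the
            # function on new objects (this overwrites any existing config).
            s3.put_bucket_notification_configuration(
                Bucket=bucket,
                NotificationConfiguration={
                    "LambdaFunctionConfigurations": [{
                        "LambdaFunctionArn": function_arn,
                        "Events": ["s3:ObjectCreated:*"],
                    }]
                },
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception as exc:
        cfnresponse.send(event, context, cfnresponse.FAILED, {"Error": str(exc)})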