We have two Jenkins pipelines: one called log monitoring and the other called alert trigger. Both pipelines are triggered on any change to the Terraform scripts in Bitbucket.
Both pipelines work fine, and the AWS resources are created successfully.
The problem we are facing is this:
The log monitoring pipeline creates an AWS resource, whose ARN we want to fetch from the AWS console and use in the alert trigger pipeline.
Any thoughts on how we can achieve this? We want to automate the whole flow instead of manually fetching the ARN and triggering the downstream pipeline.
You can attach tags to your existing resources and use those tags to retrieve the respective ARNs with the AWS CLI in a shell script. You can then use that ARN value inside your pipeline dynamically.
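As an illustration of the same idea in Python (the boto3 equivalent of aws resourcegroupstaggingapi get-resources on the CLI), here is a minimal sketch - the tag key and value below are placeholders for whatever tags your Terraform applies:

import boto3

# The Resource Groups Tagging API lets you look up ARNs by tag
tagging = boto3.client('resourcegroupstaggingapi')

def arn_for_tag(key, value):
    # Return the first ARN whose resource carries the given tag, or None
    resp = tagging.get_resources(TagFilters=[{'Key': key, 'Values': [value]}])
    mappings = resp.get('ResourceTagMappingList', [])
    return mappings[0]['ResourceARN'] if mappings else None

# Hypothetical tag applied by the log monitoring pipeline's Terraform
print(arn_for_tag('Pipeline', 'log-monitoring'))

The printed ARN can then be passed as a parameter to the downstream alert trigger pipeline instead of being copied from the console.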
So I'm trying to run Terraform through CodePipeline. I need to manage a fleet of clusters. It seems CodePipeline is a good way to trigger certain pipelines on certain conditions.
I have a very simple requirement: I want to see the Terraform execution in real time. I want to expose the CodePipeline run in a way that I can stream it. Is this where EventBridge is used? I tried to look at an EventBridge example here - https://medium.com/hackernoon/monitoring-ci-cd-pipelines-with-amazon-eventbridge-32177e2f2c3e - but it doesn't seem to stream run output in real time.
Which event or hook should I attach to? And is CodePipeline even the right thing to use here?
Which event or hook should I attach to?
You're looking at the wrong AWS service. EventBridge is not for streaming log output. It is for discrete events, not a stream.
Your CodePipeline would be using a CodeBuild task to execute Terraform. Your CodeBuild task will be configured to log to AWS CloudWatch Logs. You can view the CloudWatch Logs output in the AWS CloudWatch web console, with the option to poll for new log output.
You can also do the same in a command line console with the aws logs tail command, documented here.
To do the same thing in your own code, you would have to poll the CloudWatch Logs API in a loop.
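For illustration, a rough sketch of such a polling loop using boto3 - the log group and stream names are placeholders for whatever your CodeBuild project logs to:

import time
import boto3

logs = boto3.client('logs')

def tail_log_stream(group, stream, interval=5):
    # Poll CloudWatch Logs for new events, similar to `aws logs tail --follow`
    token = None
    while True:
        kwargs = {'logGroupName': group, 'logStreamName': stream, 'startFromHead': True}
        if token:
            kwargs['nextToken'] = token
        resp = logs.get_log_events(**kwargs)
        for event in resp['events']:
            print(event['message'].rstrip())
        token = resp['nextForwardToken']
        time.sleep(interval)

tail_log_stream('/aws/codebuild/<your-terraform-project>', '<log-stream-name>')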
And is CodePipeline even the right thing to use here?
Yes, absolutely.
What is the easiest way to trigger an Elastic Beanstalk redeployment from within AWS?
The goal is to just reboot all instances, but in an orderly manner, according to ALB / target group rules.
Currently I am doing this locally via the EB shell by calling eb deploy without doing any code changes. But rather than doing this manually on a regular basis, I want to use a scheduled CloudWatch rule to trigger it.
One way would be to set up a CloudWatch Events rule with a schedule expression.
The rule would trigger a Lambda function based on your pre-defined schedule. The Lambda can be as simple as just triggering a re-deployment of the existing application version:
import boto3

eb = boto3.client('elasticbeanstalk')

def lambda_handler(event, context):
    # Re-deploy the already-existing application version to the environment
    response = eb.update_environment(
        ApplicationName='<your-eb-app-name>',
        EnvironmentName='<your-eb-env-name>',
        VersionLabel='<existing-label-of-application-version-to-redeploy>')
    print(response)
You could customize the Lambda to be more useful, e.g. by parameterizing it instead of hard-coding all the names required for update_environment.
The Lambda execution role also needs to be adjusted to allow these actions on Elastic Beanstalk.
The other option would be to use CodePipeline with two stages:
A Source stage (S3) where you specify the zip with the application version to deploy. Its bucket must be versioned.
A Deploy stage with the Elastic Beanstalk provider.
The pipeline would also be triggered by the CloudWatch rule on a schedule.
There is actually a feature called Instance Refresh that replaces the instances without deploying a new app version.
Triggering that via a Lambda function on a CloudWatch schedule seems to be the easiest and cleanest way for my use case. However, keep in mind that replacing instances is not the same as rebooting / redeploying, for example when it comes to managing burst credits.
This AWS blog post describes how to set up a scheduled instance refresh with AWS Lambda.
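The blog post has the full walkthrough; as a rough, hedged sketch of the idea - assuming the refresh is issued against the environment's underlying Auto Scaling group via the EC2 Auto Scaling instance refresh API, and with the environment name as a placeholder - a scheduled Lambda could look like this:

import boto3

eb = boto3.client('elasticbeanstalk')
autoscaling = boto3.client('autoscaling')

def lambda_handler(event, context):
    # Look up the Auto Scaling group backing the Beanstalk environment
    resources = eb.describe_environment_resources(EnvironmentName='<your-eb-env-name>')
    asg_name = resources['EnvironmentResources']['AutoScalingGroups'][0]['Name']

    # Replace instances in place while keeping a minimum amount of healthy capacity
    response = autoscaling.start_instance_refresh(
        AutoScalingGroupName=asg_name,
        Preferences={'MinHealthyPercentage': 90},
    )
    print(response['InstanceRefreshId'])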
Background:
I am trying to generate a patch compliance data report in QuickSight. To do this I am using Terraform, and I have added all the inventory data to an S3 bucket.
I have created an Athena automation document which creates databases/tables in Athena from the S3 bucket data. Now I want to add some Terraform code which executes the automation document daily at a scheduled time.
For more information about this task: https://reinvent2019.awsmanagement.tools/mgt410/en/cont.html
Problem:
I can create a maintenance window to define a cron schedule for the automation task, but I do not have a target to add.
My Athena automation script only creates/updates the database in Athena; there is no target involved here.
Can someone guide me on this issue?
Thank you in advance
You can create a CloudWatch Events rule that triggers on a schedule and calls a Lambda function, which in turn invokes your Athena logic. Here is a good example: https://thedataguy.in/automate-aws-athena-create-partition-on-daily-basis/
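For illustration, a minimal sketch of such a Lambda using boto3 - the query, database name, and results bucket below are placeholders for your own Athena logic:

import boto3

athena = boto3.client('athena')

def lambda_handler(event, context):
    # Re-run the query that keeps the tables/partitions in sync with S3
    response = athena.start_query_execution(
        QueryString='MSCK REPAIR TABLE patch_compliance',    # placeholder query
        QueryExecutionContext={'Database': 'inventory_db'},  # placeholder database
        ResultConfiguration={'OutputLocation': 's3://<your-athena-results-bucket>/'},
    )
    print(response['QueryExecutionId'])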
A note on QuickSight - if you are using SPICE instead of direct query, you need to manage the SPICE rebuild too, which might be tricky. The default settings only allow for a once-a-day rebuild on a schedule.
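If a single scheduled rebuild isn't enough, one option (an assumption about your setup, not part of the original answer) is to trigger the SPICE refresh yourself from the same Lambda via the QuickSight CreateIngestion API; a minimal sketch, with the account and dataset IDs as placeholders:

import uuid
import boto3

quicksight = boto3.client('quicksight')

def refresh_spice(account_id, dataset_id):
    # Kick off a SPICE ingestion (refresh) for the given dataset
    return quicksight.create_ingestion(
        AwsAccountId=account_id,
        DataSetId=dataset_id,
        IngestionId=str(uuid.uuid4()),
    )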
I am looking to add notifications to a build pipeline I am deploying in AWS via Terraform. I cannot seem to locate the resource which creates the status notifications in CodeBuild. Can someone let me know which resource this is?
You've not mentioned what sort of notification you are looking to create, so I won't be able to provide sample code; however, as per the AWS docs here, you can detect state changes in CodePipeline using CloudWatch Events.
You can find the Terraform reference for CloudWatch Event Rules here, and you can follow the docs to create a resource that monitors CodePipeline for state changes using CloudWatch Events Rules.
We are in the process of automating the launch of on demand EMR clusters. This will be triggered upon the arrival of certain files in AWS S3. In this regard, we are evaluating two options -
1. A shell script that will invoke the AWS CLI to launch the desired EMR cluster
2. A Python script that will invoke methods for EMR start/stop using boto3
Is there any preference of using one option over the other?
The former appears easier, as we can take the CLI commands from the manually created EMR clusters in the AWS console and package them into a shell script, while the latter option has more intricacies, doesn't have such a starting point, and the methods would have to be written from scratch.
Appreciate your inputs in this regard.
While both can achieve what you want, I would suggest going with Lambda (Python).
Create an event trigger on the S3 location where the data is expected - this will invoke your Lambda (Python code), and the Lambda can in turn launch your EMR cluster.
s3 -> lambda -> EMR
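For illustration, a minimal sketch of the Lambda using boto3 - the release label, instance configuration, and IAM roles are placeholders you would adapt:

import boto3

emr = boto3.client('emr')

def lambda_handler(event, context):
    # Triggered by the S3 event when the expected file arrives
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']
    print(f'Launching EMR cluster for s3://{bucket}/{key}')

    response = emr.run_job_flow(
        Name='on-demand-cluster',
        ReleaseLabel='emr-6.10.0',
        Instances={
            'MasterInstanceType': 'm5.xlarge',
            'SlaveInstanceType': 'm5.xlarge',
            'InstanceCount': 3,
            'KeepJobFlowAliveWhenNoSteps': False,
        },
        JobFlowRole='EMR_EC2_DefaultRole',
        ServiceRole='EMR_DefaultRole',
    )
    print(response['JobFlowId'])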
Another option could be to trigger a Data Pipeline from the Lambda, which will create the EMR cluster for you.
s3 -> lambda -> pipeline -> EMR
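In that setup the Lambda only has to activate a pre-defined pipeline; a minimal sketch, with the pipeline ID as a placeholder:

import boto3

datapipeline = boto3.client('datapipeline')

def lambda_handler(event, context):
    # Activate an already-defined Data Pipeline that provisions the EMR cluster
    response = datapipeline.activate_pipeline(pipelineId='<your-pipeline-id>')
    print(response)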
Advantages of using a pipeline vs a Lambda to create the EMR cluster:
GUI based: You can pick and choose the components needed, like resources, activities, schedules, etc.
Minimal Python: In the Lambda you will just configure the pipeline to be triggered; you don't need to implement error handling, retries, success or failure emails, etc. All of this is built into pipelines.
Flexible: Since pipeline components are modular and configurable, you can change any configuration quickly. Code changes often take more time.
You can read more about it here - https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/what-is-datapipeline.html