AWS - Conditionally run a script on EC2 instances

I am looking for a way to conditionally run a script on every existing and new EC2 instance.
For example, in Azure you can create an Azure Policy that is evaluated against every existing and new VM, and when a set of conditions applies to that VM, you can deploy a VM extension or run a DSC script.
I am looking for the equivalent service in AWS.

From AWS Systems Manager Run Command - AWS Systems Manager:
Using Run Command, a capability of AWS Systems Manager, you can remotely and securely manage the configuration of your managed instances. A managed instance is any Amazon Elastic Compute Cloud (Amazon EC2) instance or on-premises machine in your hybrid environment that has been configured for Systems Manager. Run Command allows you to automate common administrative tasks and perform one-time configuration changes at scale. You can use Run Command from the AWS Management Console, the AWS Command Line Interface (AWS CLI), AWS Tools for Windows PowerShell, or the AWS SDKs.
Administrators use Run Command to perform the following types of tasks on their managed instances: install or bootstrap applications, build a deployment pipeline, capture log files when an instance is removed from an Auto Scaling group, and join instances to a Windows domain.
You will need to trigger the Run Command to execute on nominated EC2 instances. It will not automatically run for every 'new' instance.
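For illustration, a one-off Run Command invocation from the AWS CLI against tag-targeted instances might look like the following sketch (the tag key/value and the command are placeholders; AWS-RunShellScript is the AWS-managed document for shell scripts):

    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --targets "Key=tag:Environment,Values=Prod" \
        --parameters 'commands=["/opt/scripts/configure.sh"]'

If you need the script to apply to instances as they come online, the same document and tag targets can instead be attached as a State Manager association (aws ssm create-association), which is applied to matching instances, including newly launched ones.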
Alternatively, there is Evaluating Resources with AWS Config Rules - AWS Config:
Use AWS Config to evaluate the configuration settings of your AWS resources. You do this by creating AWS Config rules, which represent your ideal configuration settings. While AWS Config continuously tracks the configuration changes that occur among your resources, it checks whether these changes violate any of the conditions in your rules. If a resource violates a rule, AWS Config flags the resource and the rule as noncompliant.
For example, when an EC2 volume is created, AWS Config can evaluate the volume against a rule that requires volumes to be encrypted. If the volume is not encrypted, AWS Config flags the volume and the rule as noncompliant. AWS Config can also check all of your resources for account-wide requirements. For example, AWS Config can check whether the number of EC2 volumes in an account stays within a desired total, or whether an account uses AWS CloudTrail for logging.
You can create an AWS Config custom rule that triggers a process when a non-compliant resource is found. This way, an automated action could correct the situation.
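As a rough sketch (the rule name is a placeholder), the encrypted-volume example from the quote can be created with the AWS-managed ENCRYPTED_VOLUMES rule via the CLI:

    aws configservice put-config-rule --config-rule '{
        "ConfigRuleName": "ec2-volumes-encrypted",
        "Source": {"Owner": "AWS", "SourceIdentifier": "ENCRYPTED_VOLUMES"}
    }'

A remediation configuration (aws configservice put-remediation-configurations) can then point the rule at an SSM Automation document so that non-compliant resources are corrected automatically.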

You can also use an AWS managed service such as OpsWorks (managed Chef/Puppet).
This gives you a way of running commands in an organized manner by letting you define sets of instances and their associated resources.

Related

What is the difference between an AWS Systems Manager Document of Type Automation and Command?

They seem to serve the same purpose. They can both be broken down into steps, each step being a script.
Both Command and Automation documents can also be part of SSM associations in State Manager.
So my question is simple: in which case would I need to create a Command document instead of an Automation document?
From the documentation:
Using Run Command, a capability of AWS Systems Manager, you can remotely and securely manage the configuration of your managed nodes.
So with Command documents you are executing commands on your managed instances (e.g. yum update).
Automation, a capability of AWS Systems Manager, simplifies common maintenance, deployment, and remediation tasks for AWS services like Amazon Elastic Compute Cloud (Amazon EC2), Amazon Relational Database Service (Amazon RDS), Amazon Redshift, Amazon Simple Storage Service (Amazon S3), and many more.
With an Automation document you can interact with any AWS service to execute actions (e.g. launch an EC2 instance, create an AMI from a running instance, create an RDS snapshot, etc.).
Moreover, you can define retries and create branches in the process (e.g. when a step fails, take a different path than when it succeeds).
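The difference also shows up in how each document type is invoked. A hedged sketch (the instance ID is a placeholder; both documents are AWS-managed):

    # Command document: executed by the SSM agent on the instance itself
    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --targets "Key=instanceids,Values=i-0123456789abcdef0" \
        --parameters 'commands=["sudo yum -y update"]'

    # Automation document: orchestrated by Systems Manager against AWS APIs
    aws ssm start-automation-execution \
        --document-name "AWS-RestartEC2Instance" \
        --parameters "InstanceId=i-0123456789abcdef0"

In short: if the work has to happen inside the operating system, use a Command document; if it is an orchestration of AWS API calls (possibly including Run Command steps), use an Automation document.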

Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance?

Background:
We have several legacy applications that are running on AWS EC2 instances while we develop a new suite of applications. Our company updates its approved AMIs on a monthly basis and requires all running instances to run the new AMIs. This forces us to regularly tear down the instances and rebuild them with the new AMIs. In order to comply with these requirements, all infrastructure and application deployment must be fully automated.
Approach:
To achieve automation, I'm using Terraform to build the infrastructure and Ansible to deploy the applications. Terraform will create EC2 instances, security groups, SSH keys, load balancers, Route 53 records, and an inventory file to be used by Ansible, which includes the IP addresses of the created instances. Ansible will then deploy the legacy applications to the hosts supplied by the inventory file. I have a shell script that executes first the Terraform run and then the Ansible playbooks.
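For reference, the wrapper script is essentially two commands; a minimal sketch, assuming the inventory file and playbook names shown here (both placeholders):

    #!/bin/sh
    set -e                                       # abort if Terraform fails
    terraform init -input=false
    terraform apply -auto-approve                # also writes the Ansible inventory file
    ansible-playbook -i inventory deploy.yml     # deploy the apps to the new hosts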
Question:
To achieve full automation I need to run this process whenever an AMI is updated. The current AMI release is stored in Parameter Store, and Terraform can detect when there is a change, but I still need to manually trigger the job. We also have an AWS SNS topic to which I can subscribe to receive notification of new AMI releases. My initial thought was to simply put the Terraform/Ansible scripts on an EC2 instance and have a cron job run them monthly. This would likely work, but I wonder if it is the best approach. For starters, I would need to use an EC2 instance which itself would need to be updated with new AMIs, so unless I have another process to do this I would need to do it manually. Second, although our AMIs could potentially be updated monthly, sometimes they are not, so I would sometimes be running the jobs unnecessarily. Of course I could somehow detect whether the AMI ID has changed and run the job accordingly, but it seems like a better approach would be to react to the AWS SNS topic.
Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance? And how can I trigger the scripts in response to the SNS topic?
Options I was testing to trigger an Ansible playbook in response to webhooks from Alertmanager, to get some form of self-healing (might be useful for you):
1. Run Ansible in AWS Lambda and front it with API Gateway as a webhook, triggered by Alertmanager: https://medium.com/#jacoelho/ansible-in-aws-lambda-980bb8b5791b
2. SNS receiver in AWS -> subscriber -> AWS Systems Manager, which supports Ansible: https://aws.amazon.com/blogs/mt/keeping-ansible-effortless-with-aws-systems-manager/
3. Alertmanager targets a Jenkins webhook; the Jenkins pipeline uses the Ansible plugin to execute playbooks: https://medium.com/appgambit/ansible-playbook-with-jenkins-pipeline-2846d4442a31
4. Front the Ansible server with a webhook server that executes Ansible commands as post actions; this can be a Flask-based web server or the webhook tool below: https://rubyfaby.medium.com/auto-remediation-with-prometheus-alert-manager-and-ansible-e4d7bdbb6abf and https://github.com/adnanh/webhook
5. You can also use AWX (Ansible Tower in open-source form), which exposes the Ansible server as an API endpoint (webhook); currently only GitHub and GitLab webhooks are supported.
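To answer the second part of the question directly: yes, nothing has to live on an EC2 instance. One hedged sketch (all ARNs and names are placeholders) is to subscribe a small Lambda function to the AMI-release SNS topic and have it start a CodeBuild project that runs the Terraform/Ansible wrapper, so the job runs only when a new AMI is actually announced:

    # Subscribe a Lambda function to the AMI-release topic
    aws sns subscribe \
        --topic-arn arn:aws:sns:us-east-1:123456789012:ami-releases \
        --protocol lambda \
        --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:trigger-rebuild

    # Allow SNS to invoke the function
    aws lambda add-permission \
        --function-name trigger-rebuild \
        --statement-id AllowSNSInvoke \
        --action lambda:InvokeFunction \
        --principal sns.amazonaws.com \
        --source-arn arn:aws:sns:us-east-1:123456789012:ami-releases

    # Inside the function, one call is enough to start the build:
    #   aws codebuild start-build --project-name rebuild-legacy-apps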

Installing AWS CLI on EC2 instances via Spinnaker/Terraform

Are there any security considerations in terms of installing the AWS CLI as part of baking an AMI?
I can see the following ways in which the AWS CLI can be installed:
1. Via baking the image (i.e. making the AWS CLI part of the base AMI itself)
2. Via cloud-init
3. Installing it as a prerequisite just before your service bootstraps
I see a strong NO (from the internal community) on the above, for the reason that the AWS instance (Spinnaker managed) can do more than just access cloud-native resources and is very powerful. So in this case, if we tighten the Spinnaker IAM role with which it deploys instances, should it be fine?
It really depends on what you are going to do with the AWS CLI in each baked EC2 instance. Is it for debugging purposes? Is it part of the functionality of your system?
If it is for debugging only, you can install the AWS CLI but leave an IAM role with minimum permissions attached to the instance. You could have a role with the special permissions you need, attach it to the desired instances, access the instance, and perform your debugging actions.
Other than that, it really is not recommended that you use the AWS CLI or package managers inside instances.
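As a sketch of the debugging-role approach above (the instance ID, association ID, and profile name are placeholders), the role can be attached only for the duration of the session:

    # Attach a least-privilege instance profile for the debugging session
    aws ec2 associate-iam-instance-profile \
        --instance-id i-0123456789abcdef0 \
        --iam-instance-profile Name=debug-readonly-profile

    # Look up and remove the association when done
    aws ec2 describe-iam-instance-profile-associations \
        --filters Name=instance-id,Values=i-0123456789abcdef0
    aws ec2 disassociate-iam-instance-profile \
        --association-id iip-assoc-0123456789abcdef0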

Jenkins: Deploy application to an EC2 instance

I've been working on a CloudFormation template for my environment. I end up with a:
VPC
Subnet x2
Autoscaling group
Launch configuration (EC2 instances on an Amazon Linux AMI)
Application load balancer
CodeDeploy (for deployments)
But I ran into problems with the CodeDeploy configuration in CloudFormation, as not all features are available for EC2 instances. After configuring CodeDeploy manually, I get an error while deploying, such as "too few healthy instances", after which the created instances are not destroyed even if rollback is enabled. Right now I'm using only one EC2 instance for the application, but I plan to scale in the future.
Is there an alternative for CodeDeploy? I'm interested to trigger deploy from Jenkins Machine.
For the requirements above, I suggest that AWS Elastic Beanstalk is a better way to deploy code to AWS, because it manages those resources for you. For code deployment, Codeship is also a good way to manage deployments integrated with GitHub, instead of AWS CodeDeploy.
Ensure that you have assigned the correct IAM role for the EC2 instance by going to the "Instance Settings". This will ensure that your deployment occurs smoothly without throwing that error.
You can also configure deployment to EC2 using CodeDeploy through Jenkins.
Steps to follow:
AWS CodeDeploy:
Create a new CodeDeploy application.
Enter a suitable application name and choose "EC2/On-premises" as the compute platform.
Add a deployment group under the application, e.g. "test".
Choose in-place deployment.
Add a service role, e.g. "Codedeploy development".
This will allow CodeDeploy to interact with other AWS services.
Choose a suitable deployment configuration, preferably "OneAtATime" if deploying to a single EC2 instance.
Environment configuration:
Choose the EC2 instance to which you want to deploy the application.
Jenkins:
On Jenkins, create a job with a suitable application name.
In the "Post Build Action" section, click on "Add Post Build Action"
Jenkins - post build configuration
Choose : "Deploy an application to AWS CodeDeploy"
Enter the CodeDeploy and S3 details in that section.
The S3 bucket will contain all the builds, which are used to deploy onto EC2 using CodeDeploy.
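Under the hood, the Jenkins plugin does roughly what the following CLI calls do; a hedged sketch with placeholder application, bucket, and deployment-group names:

    # Bundle the build and push the revision to S3
    aws deploy push \
        --application-name my-app \
        --s3-location s3://my-deploy-bucket/my-app.zip \
        --source .

    # Deploy that revision to the "test" deployment group
    aws deploy create-deployment \
        --application-name my-app \
        --deployment-group-name test \
        --deployment-config-name CodeDeployDefault.OneAtATime \
        --s3-location bucket=my-deploy-bucket,key=my-app.zip,bundleType=zip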

User-data script doesn't run at EC2 instance launch

Background:
Services used: EC2, Auto Scaling, S3, SQS, CloudWatch
AMI and environment: Windows 64-bit
Network: IAM and security group attached
Job: run a script which starts a program (.exe) that is downloaded from S3
I have an Auto Scaling group that launches N instances. The user-data script is based on the AWS CLI plus a few commands in PowerShell. I was expecting the instances to execute my script upon initialization. Note that one of the tasks before the job is to first download the AWS CLI using PowerShell, because the rest of the script is based on AWS commands.
What am I missing? I thought the launch would run the script in the user data.
Note that this script was tested on an instance with the same configuration (VPC, security group, etc.).
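One thing worth checking that is not visible in the question: on Windows AMIs, the launch agent only executes user data that is wrapped in <powershell> or <script> tags, and by default only on the first boot of an instance. A minimal sketch of the wrapper (bucket and paths are placeholders; the MSI URL is the official AWS CLI installer):

    <powershell>
    # Install the AWS CLI first, then fetch and run the job
    Start-Process msiexec.exe -ArgumentList '/i https://awscli.amazonaws.com/AWSCLIV2.msi /qn' -Wait
    New-Item -ItemType Directory -Force -Path C:\job | Out-Null
    & "C:\Program Files\Amazon\AWSCLIV2\aws.exe" s3 cp s3://my-bucket/job/program.exe C:\job\program.exe
    Start-Process C:\job\program.exe
    </powershell>

On EC2Launch-based AMIs, C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log shows whether the user data ran at all and any errors it produced.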