Elastic Beanstalk application versions update automatically at a specific time - amazon-web-services

Our Elastic Beanstalk application version is updated automatically every day at a specific time. We don't know how it is being triggered or where the source bundle is uploaded from. Can someone suggest how we can find out why the application version is updated and the source is uploaded?
**Version label:** 87f62452d3h673377a1d331502d8f8-jenkins-hzavk12-branch-deploy-program-time-service-100
**Source:** class-time-service-hzavks12/program-time-service-87f62452d3h673377a1d331502d8f8-jenkins-hzavk12-branch-deploy-program-time-service-100.zip

To investigate who or what triggers the deployments of your app daily at a specific time, have a look at:
- the event history in CloudTrail for API calls related to Elastic Beanstalk;
- rules in CloudWatch Events, especially scheduled expressions, which could execute a Lambda-based or CodePipeline deployment of your application;
- your Jenkins setup. From the file name it seems the deployment zip originates from Jenkins, so maybe Jenkins is performing the deployment on a recurring schedule.
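For the CloudTrail route, here is a minimal boto3 sketch (purely illustrative) that searches the event history for the CreateApplicationVersion call and prints which principal made it:

```python
# Sketch: search CloudTrail event history for the API call that creates
# Elastic Beanstalk application versions, to see which principal made it.
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "CreateApplicationVersion"}
    ],
    MaxResults=50,
)

for event in events["Events"]:
    # Username shows which IAM user/role (e.g. a Jenkins deploy user) made the call.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```

If the Username points at a Jenkins service user, the recurring schedule almost certainly lives in Jenkins rather than in AWS.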

Related

Which is the best way on AWS to set up CI/CD for a Django app from GitHub?

I have a Django web application which is not too large and uses the default database that comes with Django. It doesn't have a large volume of requests either, probably not more than 100 requests per second.
I wanted to figure out a method of continuous deployment on AWS from my source code residing in GitHub. I don't want to use the EB CLI to deploy to Elastic Beanstalk because it needs commands on the command line and is not automated deployment. I had tried setting up workflows for my app in GitHub Actions and had set up a web server environment in EB too, but it didn't seem to work. Also, I couldn't figure out the final URL to see my app from that EB environment. I am working on a Windows machine.
Please suggest the least expensive way of doing this, or share any videos/articles you may have which will get me to my app finally being visible in the browser after deployment.
You can use AWS CodePipeline, a service that builds, tests, and deploys your code every time there is a code change, based on the release process models you define. Use CodePipeline to orchestrate each step in your release process. As part of your setup, you can plug other AWS services into CodePipeline to complete your software delivery pipeline.
https://docs.aws.amazon.com/whitepapers/latest/cicd_for_5g_networks_on_aws/cicd-on-aws.html
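To make that concrete, here is a hedged boto3 sketch of such a pipeline with a GitHub source stage and an Elastic Beanstalk deploy stage. Every name and ARN below (role, artifact bucket, CodeStar connection, application and environment names) is a placeholder, not something from the question:

```python
# Sketch: a minimal CodePipeline with a GitHub source stage and an
# Elastic Beanstalk deploy stage. All ARNs and names are placeholders.
import boto3

codepipeline = boto3.client("codepipeline")

codepipeline.create_pipeline(
    pipeline={
        "name": "django-eb-pipeline",  # hypothetical name
        "roleArn": "arn:aws:iam::123456789012:role/CodePipelineServiceRole",
        "artifactStore": {"type": "S3", "location": "my-pipeline-artifacts-bucket"},
        "stages": [
            {
                "name": "Source",
                "actions": [{
                    "name": "GitHubSource",
                    "actionTypeId": {
                        "category": "Source",
                        "owner": "AWS",
                        "provider": "CodeStarSourceConnection",
                        "version": "1",
                    },
                    "configuration": {
                        "ConnectionArn": "arn:aws:codestar-connections:eu-west-1:123456789012:connection/abc",
                        "FullRepositoryId": "my-user/my-django-app",
                        "BranchName": "main",
                    },
                    "outputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
            {
                "name": "Deploy",
                "actions": [{
                    "name": "DeployToEB",
                    "actionTypeId": {
                        "category": "Deploy",
                        "owner": "AWS",
                        "provider": "ElasticBeanstalk",
                        "version": "1",
                    },
                    "configuration": {
                        "ApplicationName": "my-django-app",
                        "EnvironmentName": "my-django-env",
                    },
                    "inputArtifacts": [{"name": "SourceOutput"}],
                }],
            },
        ],
    }
)
```

Most people build this pipeline through the console wizard instead; the sketch is only meant to show how few moving parts are involved.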

AWS ECS run latest task definition

I am trying to run the latest task definition image built from a GitHub deployment (CD). On AWS it creates task definition revisions, for example "task-api:1", "task-api:2", but my cluster is still running "task-api:1" even though a new image has been built and a newer revision exists. So far I have to manually stop the old task and start a new one. How can I have it automated?
You must wrap your tasks in a service and use rolling updates for automated deployments.
When the rolling update (ECS) deployment type is used for your service and a new service deployment is started, the Amazon ECS service scheduler replaces the currently running tasks with new tasks.
Read: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html
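As an illustration, here is a hedged boto3 sketch of creating such a service with the rolling update deployment type; the cluster, service, and task definition names are made up:

```python
# Sketch: create an ECS service that uses the rolling update ("ECS")
# deployment type, so new deployments replace running tasks automatically.
# Cluster/service/task names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.create_service(
    cluster="my-cluster",
    serviceName="task-api-service",
    taskDefinition="task-api",          # family name: latest ACTIVE revision is used
    desiredCount=2,
    deploymentController={"type": "ECS"},  # rolling update deployment type
    deploymentConfiguration={
        "maximumPercent": 200,           # start new tasks before stopping old ones
        "minimumHealthyPercent": 100,    # keep old tasks until new ones are healthy
    },
)
```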
This is DevOps, so you need a CI/CD pipeline that will do the rolling updates for you. Look at CodeBuild, CodeDeploy and CodePipeline (and CodeCommit if you integrate your code repository in AWS with your CI/CD).
Read: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-ecs-ecr-codedeploy.html
This is a complex topic, but it pays off in the end.
Judging from what you have said in the comments:
I created my task via the AWS console. I am running just the task definition on its own without a service, plus a service with a task definition launched via EC2, not targeting both of them; so in the task definition JSON file on my GitHub, both repositories are tied to a revision of a task (could that be a problem?)
It's difficult to understand exactly how you have this set up and it'd probably be a good idea for you to go back and understand the services you are using a little better using the guide you are following or AWS documentation. Pushing a new task definition does not automatically update services to use the new definition.
That said, my guess is that you need to update the service in ECS to use the latest task definition. You can do that in many ways:
- Through the console (https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service-console-v2.html).
- Through the CLI (https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html); a minimal sketch of this call follows this list.
- Through IaC like the CDK (https://docs.aws.amazon.com/cdk/api/latest/docs/aws-ecs-readme.html).
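Here is a hedged boto3 equivalent of the CLI call (cluster and service names are placeholders); passing only the task definition family lets ECS resolve the latest ACTIVE revision:

```python
# Sketch: point the service at the latest ACTIVE revision of the task
# definition family. Cluster and service names are placeholders.
import boto3

ecs = boto3.client("ecs")

ecs.update_service(
    cluster="my-cluster",
    service="task-api-service",
    taskDefinition="task-api",   # family only: ECS resolves the latest ACTIVE revision
)
```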
This can be automated but you would need to set up a process to automate it.
I would recommend reading some guides on how you could automate deployment and updates using the CDK. Amazon provides a good guide to get you started: https://docs.aws.amazon.com/cdk/latest/guide/ecs_example.html.

Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance?

Background:
We have several legacy applications that are running in AWS EC2 instances while we develop a new suite of applications. Our company updates its approved AMIs on a monthly basis and requires all running instances to use the new AMIs. This forces us to regularly tear down the instances and rebuild them with the new AMIs. To comply with these requirements, all infrastructure and application deployment must be fully automated.
Approach:
To achieve automation, I'm using Terraform to build the infrastructure and Ansible to deploy the applications. Terraform will create EC2 instances, security groups, SSH keys, load balancers, Route 53 records, and an inventory file to be used by Ansible, which includes the IP addresses of the created instances. Ansible will then deploy the legacy applications to the hosts supplied by the inventory file. I have a shell script that executes first the Terraform script and then the Ansible playbooks.
Question:
To achieve full automation I need to run this process whenever an AMI is updated. The current AMI release is stored in Parameter Store, and Terraform can detect when there is a change, but I still need to manually trigger the job. We also have an AWS SNS topic to which I can subscribe to receive notification of new AMI releases. My initial thought was to simply put the Terraform/Ansible scripts on an EC2 instance and have a cron job run them monthly. This would likely work, but I wonder if it is the best approach. For starters, I would need to use an EC2 instance which itself would need to be updated with new AMIs, so unless I have another process to do this I would need to do it manually. Second, although our AMIs could potentially be updated monthly, sometimes they are not, so I would sometimes be running the jobs unnecessarily. Of course I could simply detect whether the AMI ID has changed and run the job accordingly, but it seems like a better approach would be to react to the AWS SNS topic.
Is it possible to run the Terraform/Ansible scripts without having them on a running EC2 instance? And how can I trigger the scripts in response to the SNS topic?
Options I was testing to trigger an Ansible playbook in response to webhooks from Alertmanager, to get some form of self-healing (might be useful for you):
- Run Ansible in AWS Lambda and front it with API Gateway as a webhook, triggered by Alertmanager: https://medium.com/@jacoelho/ansible-in-aws-lambda-980bb8b5791b
- SNS receiver in AWS -> subscriber -> AWS Systems Manager, which supports Ansible:
https://aws.amazon.com/blogs/mt/keeping-ansible-effortless-with-aws-systems-manager/
- Alertmanager targets a Jenkins webhook -> a Jenkins pipeline uses the Ansible plugin to execute playbooks:
https://medium.com/appgambit/ansible-playbook-with-jenkins-pipeline-2846d4442a31
- Front the Ansible server with a webhook server which executes Ansible commands as post actions; this can be a Flask-based webserver or the webhook tool linked below:
https://rubyfaby.medium.com/auto-remediation-with-prometheus-alert-manager-and-ansible-e4d7bdbb6abf
https://github.com/adnanh/webhook
- You can also use AWX (Ansible Tower in its open-source form), which exposes the Ansible server as an API endpoint (webhook); currently only GitHub and GitLab webhooks are supported.
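To tie this back to the question about reacting to the AMI SNS topic: here is a hedged sketch of a Lambda handler, subscribed to that topic, that kicks off a CodeBuild project which could run the Terraform/Ansible scripts. The project name is an assumption; any other runner (Systems Manager, Jenkins, AWX) would slot in the same way:

```python
# Sketch: Lambda handler subscribed to the AMI-release SNS topic.
# On each notification it starts a CodeBuild project that runs the
# Terraform/Ansible scripts, so no long-running EC2 instance is needed.
# The project name "terraform-ansible-rebuild" is hypothetical.
import boto3

codebuild = boto3.client("codebuild")

def handler(event, context):
    for record in event["Records"]:
        message = record["Sns"]["Message"]  # details of the new AMI release
        print("New AMI notification:", message)
        codebuild.start_build(projectName="terraform-ansible-rebuild")
```

This way the job only runs when an AMI is actually released, instead of on a blind monthly cron schedule.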

How to deploy code on multiple instances in an Amazon EC2 Auto Scaling group?

So we are launching an ecommerce store built on Magento. We are looking to deploy it on Amazon EC2 instances using RDS as the database service, and using Amazon Auto Scaling and Elastic Load Balancing to scale the application when needed.
What I don't understand is this:
I have installed and configured my production Magento environment on an EC2 instance (the database is in RDS). This much is working fine. But now, when I want to dynamically scale the number of instances, how will I deploy the code on the dynamically generated instances each time?
Will AWS copy the whole instance, assign it a new IP, and spawn it as a new instance, or will I have to write some code to automate this process?
Plus, will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
A detailed explanation or direction towards some resources on the topic will be greatly appreciated.
You do this in the Auto Scaling group's launch configuration. There is a UserData section in the LaunchConfiguration in CloudFormation where you would write a script that is run whenever the ASG scales up and deploys a new instance.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-as-launchconfig.html#cfn-as-launchconfig-userdata
This is the same as the UserData section of an EC2 instance. You can use lifecycle hooks that will tell the ASG not to put the EC2 instance into load until everything you want to have configured is set up.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-as-lifecyclehook.html
I linked all CloudFormation pages, but you may be using some other CI/CD tool for deploying your infrastructure; hopefully this, along with the sketch below, gets you started.
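Outside CloudFormation, the same idea can be sketched with boto3 using a launch template (the newer equivalent of a launch configuration); the AMI ID, names, and deploy commands below are all placeholders:

```python
# Sketch: a launch template whose UserData script runs on each instance
# the ASG launches, pulling and deploying the application code.
# AMI ID, names, and the deploy commands are placeholders.
import base64
import boto3

ec2 = boto3.client("ec2")

user_data = """#!/bin/bash
# Runs once at first boot of each new instance.
yum install -y git
git clone https://github.com/my-org/my-magento-app.git /var/www/html
# ... run the usual deploy/configure steps here ...
"""

ec2.create_launch_template(
    LaunchTemplateName="magento-lt",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",
        "InstanceType": "t3.medium",
        # EC2 expects UserData base64-encoded in launch templates.
        "UserData": base64.b64encode(user_data.encode()).decode(),
    },
)
```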
To start, do check AWS CloudFormation. You will be creating templates to design how the infrastructure of your application works (infrastructure as code). With these templates in place, you can roll out an update to your infrastructure by pushing changes to your templates and/or to your application code.
In my current project, we have a GitHub repository dedicated to these infrastructure templates and a separate repository for our application code. Create a pipeline for creating AWS resources that rolls out an update to AWS every time you push to the repository on a specific branch.
Create an infrastructure pipeline: have the first stage of the pipeline trigger a build whenever there are code changes to your infrastructure templates. See AWS CodePipeline and also AWS CodeBuild. These aren't the only AWS resources you'll need, but they are probably the main ones, aside from this being done in a CloudFormation template as mentioned earlier.
how will I deploy the code on the dynamically generated instances each time?
Check how containers work; it will greatly supplement your learning on how launching a new version of an application works. To begin, see Docker, but feel free to check any resources at your disposal.
Continuing with my current project: we do have a separate pipeline dedicated to our application, and it also gets triggered after an infrastructure pipeline update. Our application pipeline is designed to build a new version of our application via AWS CodeBuild; this creates an image that will become a container (see the Docker documentation).
We have two triggers, or two sources, that will start an update rollout of our application pipeline: one when there are changes to the infrastructure pipeline and it builds successfully, and a second when there are code changes in our GitHub repository connected via AWS CodeBuild.
Check AWS Auto Scaling; this area covers dynamically launching new instances, shutting down instances when needed, and replacing unhealthy instances. See also AWS CloudWatch; you can define criteria with it to trigger scaling up/down and/or in/out, as in the sketch below.
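Here is a hedged boto3 sketch of a target tracking policy that keeps average CPU near a target; the group name and target value are invented:

```python
# Sketch: a target tracking scaling policy for an Auto Scaling group.
# It scales out/in to keep average CPU near the target value.
# Group name and target value are placeholders.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="magento-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```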
Will AWS copy the whole instance, assign it a new IP, and spawn it as a new instance, or will I have to write some code to automate this process?
See AWS Elastic Load Balancing and also check out more on AWS Auto Scaling. As for the automation process: if you push through with CloudFormation, instances and/or containers (depending on your design) will be managed gracefully.
Plus, will it not be an overhead to pull code from git and deploy every time a new instance is spawned?
As mentioned earlier, having a pipeline for rolling out new versions of your application via CodeBuild will create an image with the new code changes, and when everything is ready it will be deployed and become a container. The old EC2 instance or the old container (depending on how you want your application deployed) will be gracefully shut down after the new version of your application is up and running. This gives you zero downtime.

AWS: How do I continuously deploy a static website on AWS

I have a GitHub repo with static website contents (i.e. I try not to use EC2, but the AWS static website hosting instead). Now I want to automatically deploy it on AWS any time I change and push something to the master branch of my GitHub repo.
Any experience or ideas on doing this?
I do this for many projects by using a Jenkins server. I happen to run it on another EC2 instance, but you could also run it on-premises if you prefer.
GitHub notifies the Jenkins server that a check-in has occurred, and a Jenkins job deploys all the files to the proper places and also notifies me by SMS (or email) that a deployment has occurred.
(Jenkins is not the only tool that can do this; there are others.)
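Whatever tool runs the job, the deploy step itself can be as small as syncing the repo contents to an S3 bucket configured for static website hosting. A minimal sketch, with the bucket name and local path as assumptions:

```python
# Sketch: upload the static site files to an S3 bucket configured for
# static website hosting. Bucket name and local path are placeholders;
# a Jenkins job (or GitHub Action) would run this on every push to master.
import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-static-site-bucket"   # hypothetical bucket
SITE_DIR = Path("site")            # checked-out repo contents

for path in SITE_DIR.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(SITE_DIR))
        content_type = mimetypes.guess_type(path.name)[0] or "binary/octet-stream"
        s3.upload_file(
            str(path), BUCKET, key,
            ExtraArgs={"ContentType": content_type},  # so browsers render pages
        )
```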