How can I automatically install from CodeCommit onto a Raspberry Pi?

I want to be able to use AWS CodeCommit as a repo for my scripts, and then have AWS automatically deploy any new commits to a bunch of Raspberry Pi systems (on-premises instances which I've already set up in Systems Manager). Preferably, it would take a commit, install it on a single staging RPi first, test it, and, if the tests go well, then install it on the rest of the fleet of RPi systems.
(The Raspberry Pi systems are running Ubuntu Server 20.04 LTS, so they all meet the requirements of Systems Manager.)
Is this possible with AWS? Are there any clear guides on how to do this?
The closest I've come to success was following this: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html, but that tutorial explains how to deploy from CodeCommit to an EC2 instance rather than to an on-premises instance. I tried switching to an on-premises instance instead of EC2 (in step 5) and specified the tags I've already assigned to my on-premises instances (in Systems Manager > Fleet Manager), but when I try to run the deployment I get an error: "The deployment failed because no instances were found for your deployment group. Check your deployment group settings to make sure the tags for your Amazon EC2 instances or Auto Scaling groups correctly identify the instances you want to deploy to, and then try again." The tags are definitely correct, so I don't know why that's failing.
Thanks in advance for any help.

Essentially, I had skipped a bunch of steps in the user guide without realising it. Going back to the start after a decent night's sleep helped.
PEBCAK is a thing.
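For anyone who hits the same "no instances were found" error: the steps that are easy to skip are the ones that register and tag the on-premises instances with CodeDeploy itself, separately from Systems Manager, and the deployment group then has to match those on-premises instance tags rather than EC2 tags. A rough sketch (the instance name, tag and IAM ARN below are placeholders, not my real values):

    # Register the Pi as a CodeDeploy on-premises instance.
    aws deploy register-on-premises-instance \
        --instance-name rpi-staging-01 \
        --iam-user-arn arn:aws:iam::123456789012:user/CodeDeployOnPremUser

    # Tag it so the deployment group's on-premises tag filter can find it.
    aws deploy add-tags-to-on-premises-instances \
        --instance-names rpi-staging-01 \
        --tags Key=Name,Value=RPiStaging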

Related

AWS EC2 instances with auto scaling staying in sync

I have a Node.js web application currently running on a single EC2 instance on AWS. I am thinking of using auto scaling with 2 or more EC2 instances since the load on the application is increasing.
I have been trying to understand something about AWS Auto Scaling for a couple of hours now, but I can't seem to find an answer anywhere.
Currently, I often SSH into my Ubuntu EC2 instance to modify some things or to run a deploy command (which grabs the latest code from GitHub). How does this work when you have, let's say, 4 instances running under auto scaling?
So if I SSH into a server and change the server.js file, what happens to the other 3 instances?
If that is not possible, what are my choices? I have seen many people saying that using S3 is the way to keep things in sync, but I don't fully get that. Do I have to keep all my source code in S3 and do my edits from there?
You won't be able to modify files directly on the server once they are in an auto-scaling group. Changing something on one server won't be reflected on the other servers, and even if you manually updated all the currently running servers, any servers added by auto-scaling actions will not have those changes.
There are many ways to solve this, for example using AWS CodeDeploy.
You could also configure something via an EC2 User-Data script in your auto-scaling configuration which will run on each server when they are created. That script could checkout the latest code from Git, or pull the latest build artifact from S3, and then start the app. When you have an update ready to deploy, you would simply flag the current instances as "unhealthy" and wait for the Auto-Scaling group to automatically replace them with new, updated instances.
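For example, a user data script along these lines could do it (this is a hypothetical sketch: the bucket, paths and service name are made up, and it assumes the app is wrapped in a systemd service):

    #!/bin/bash
    # Runs once when the Auto Scaling group launches a new instance.
    set -e
    aws s3 cp s3://my-artifact-bucket/myapp-latest.tar.gz /tmp/myapp.tar.gz   # or: git clone <repo>
    mkdir -p /opt/myapp
    tar -xzf /tmp/myapp.tar.gz -C /opt/myapp
    cd /opt/myapp && npm ci            # install dependencies for the Node.js app
    systemctl restart myapp            # assumes a systemd unit for the app exists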
You could use AWS EFS to host your application code so that all web servers read content from EFS instead of from their own disks. That way you don't have to worry about keeping each server's content in sync.
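A minimal sketch of the EFS approach, run on each instance at boot (the filesystem ID, region and mount point are placeholders):

    # Every instance mounts the same shared filesystem, so the code lives in one place.
    sudo mkdir -p /var/www/app
    sudo mount -t nfs4 -o nfsvers=4.1 \
        fs-12345678.efs.us-east-1.amazonaws.com:/ /var/www/app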
One way you can do it is using GitHub. You can update your code and push it to GitHub, then terminate your existing instances and let the auto-scaling group spin up new instances with the updated code. Here is a YouTube tutorial video with detailed steps on how to do it: https://www.youtube.com/watch?v=lB3Ip0Yn-Zs
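If you don't want to terminate instances by hand, one possible shortcut is an instance refresh (the group name here is a placeholder):

    # Replaces the instances in the group in batches with freshly launched ones.
    aws autoscaling start-instance-refresh --auto-scaling-group-name my-asg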

What is better: configure an instance on launch or launch a pre-baked image?

I am developing a cloud solution. I have no experience with it, so I want to ask some professionals about best practices. The current question is mostly related to Auto Scaling group functionality.
I've read a lot of howtos and guides and came to the conclusion that the only ways to provision/configure instances in an ASG are:
to pre-bake an AMI;
to use the user_data field.
So, let's assume I have an Auto Scaling group, and I want to configure the instances it launches, for example using chef-solo (or ansible-local, but as I understand it, Chef is the better option for AWS).
I see only two ways to do this:
Use Packer and pre-bake the image locally (using the chef-solo provisioner), then update the ASG configuration with the brand-new AMI;
Use a base Amazon AMI and configure instances at launch using a user_data script: install chef-solo, fetch cookbooks from git, and run chef-solo on the machine (rough sketch below).
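A rough sketch of option 2 as a user_data script (the repo URL, paths and run list are assumptions, not a definitive recipe):

    #!/bin/bash
    set -e
    # Install Chef (includes chef-solo) using the standard omnitruck installer.
    curl -L https://omnitruck.chef.io/install.sh | sudo bash
    # Fetch cookbooks and point chef-solo at them.
    sudo mkdir -p /var/chef
    sudo git clone https://github.com/example/cookbooks.git /var/chef/cookbooks
    echo 'cookbook_path "/var/chef/cookbooks"' | sudo tee /var/chef/solo.rb
    sudo chef-solo -c /var/chef/solo.rb -o 'recipe[myapp]'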
Which is the better choice in your opinion, and why? I am also interested in how to update instances that are already running in the ASG when my Chef cookbook configuration changes.
Also, if you know of better options, leave them here. I am open to discussion.
It depends on your use case.
A pre-baked AMI may be quicker to launch when scaling up, but if you need to make even small changes to the code or configuration, you'll need to bake another AMI. Using user data (whether straight OS commands, Chef, or something else) may take longer if you're installing application servers and deploying applications, and you may also be introducing external dependencies into scaling: what if the GitHub repository is offline or a necessary download is blocked?
So, if speed of scale-up is important, consider a pre-baked AMI. If you can tolerate a reasonable scale-up hit, look at a hybrid approach:
Bake into your AMI the Chef DK and any other large objects you need. For example, you might bake your application server installation into the AMI and then just have Chef configure it through user data.
Make sure your dependencies, scripts and deployables such as WAR files are in reliable repositories such as S3.
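Under that hybrid approach, the launch-time work shrinks to something like this hedged sketch (the bucket, paths and run list are placeholders; Chef and the application server are assumed to already be baked into the AMI):

    #!/bin/bash
    set -e
    # Fetch only the small, frequently changing pieces at launch.
    aws s3 cp s3://my-deployables/app.war /opt/tomcat/webapps/app.war
    # Let Chef apply configuration using cookbooks baked into the AMI.
    sudo chef-solo -c /etc/chef/solo.rb -o 'recipe[myapp::configure]'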
The best advice is to try both approaches to get some metrics and see how these fit your use cases.

Build system when using an Auto Scaling group with ELB in AWS

I was using a free-tier AWS account with one EC2 machine (Linux). I have a simple website with a backend server running on Django on port 8000 and a frontend written in Angular running on HTTP (port 80). I used nginx for HTTPS and to route calls to the backend and frontend servers.
Now, for the backend build system, I did these 3 main steps (which I automated by running Jenkins on the same machine):
1) git pull (pull the latest code from the repo).
2) Run migrations (update my DB with any new tables).
3) Restart the Django server (I was using gunicorn).
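Put together, those three steps are roughly the following (the paths and service name are approximate):

    #!/bin/bash
    set -e
    cd /home/ubuntu/myproject          # project directory (approximate path)
    git pull origin master             # 1) pull the latest code from the repo
    python manage.py migrate           # 2) apply any new database migrations
    sudo systemctl restart gunicorn    # 3) restart the gunicorn-backed Django server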
Now I have split my frontend and backend servers onto 2 different machines using Auto Scaling groups, and I am using an ELB (AWS Elastic Load Balancer) to route the requests. I am done with the setup, but now I am having problems with continuous deployment. The main thing is that the ELB uses Auto Scaling groups, which in turn use an AMI.
Since AMIs are created once, my first question is how to automate this process and deploy my latest code to the already running AWS servers.
Second, if I want to run a few steps just once for all the servers, like my second step of updating the DB with new tables, how do I achieve that?
Third, if these steps need to run on a machine, do I need another EC2 instance to automate the process of creating the AMI, updating the Auto Scaling group with it, and then deploying the latest code to it?
So, basically, I want to know the best practices people follow for deploying the latest code to AWS machines that were created by Auto Scaling groups from an AMI. I also use Bitbucket for code management.
First question: how to automate package-based deployment.
Instead of creating a new AMI for every release, create a baseline AMI which only changes when your new release requires OS changes, security patches, etc. Look into tools such as Packer to create AMIs automatically. To automate your code deployment when it changes, you can use a package-based deployment approach, which means you create a package for every release (this should be part of your CI process) and store it in a repository such as Nexus, Artifactory, or even a simple S3 bucket.
When you deploy a new instance of your application, it should run some sort of script to pull and unpack/install that package on the instance. This is the basic concept; there are many tools that can help you achieve it, for example Chef or AWS CloudFormation.
So essentially, step 1 should pull the code, create the package and store it in some repository available to your application servers; this can be done offline.
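As an illustration only (the bucket, package name and paths are placeholders), the two sides could look like this:

    # CI side (runs offline, once per release): build the package and publish it.
    tar -czf myapp-1.2.3.tar.gz -C build/ .
    aws s3 cp myapp-1.2.3.tar.gz s3://my-release-bucket/myapp/

    # Instance side (user data or a deployment hook): pull and unpack that package.
    aws s3 cp s3://my-release-bucket/myapp/myapp-1.2.3.tar.gz /tmp/
    mkdir -p /opt/myapp
    tar -xzf /tmp/myapp-1.2.3.tar.gz -C /opt/myapp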
Second question: how to run other tasks such as updating the database schema.
As mentioned above, this can also be part of your deployment automation. If you are using Chef or even a simple bash script, it can update the database schema before unpacking the new code; this really depends on your database, how you manage it, and who orchestrates the deployment.
For example, you could have a Jenkins job that pulls the new schema and updates your database whenever you roll out a release.
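For a Django setup like the one in the question, that once-per-release step could be as small as the following (assuming the job has network access to the database):

    # Run once per release, e.g. from the Jenkins job, not on every instance.
    python manage.py migrate --noinput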
Your third question can be solved by Packer: it can spin up an instance, create an AMI from it, and terminate the instance.
Read more about CI/CD and CI/CD-related tools.

Should multiple EC2 instances share an EFS, or should code be downloaded to the instance on spin-up?

I am playing around with the idea of having an Auto Scaling Group for my website that receives a lot of traffic. I need each server to be running an identical webservice, so I have come up with several ideas to make this happen.
Idea 1: Use CodeCommit + User Data
I will keep my webserver code in a git repo in CodeCommit. Then, when my EC2 instances spin up, they will install apache2 and then pull from the git repo.
Idea 2: Use Elastic File System
After a server spins up, it will mount one central EFS that has my webserver code on it. The instance will install apache2 and then use EFS to get the proper PHP files, etc.
Idea 3: Use AWS S3
Like above with apache2, but then download the webserver code from S3.
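For concreteness, Idea 1 would boil down to user data roughly like this (the region, repo name and web root are placeholders, and it assumes the instance profile is allowed to read the CodeCommit repo):

    #!/bin/bash
    set -e
    apt-get update && apt-get install -y apache2 git awscli
    # Let git authenticate to CodeCommit with the instance's IAM credentials.
    git config --global credential.helper '!aws codecommit credential-helper $@'
    git config --global credential.UseHttpPath true
    rm -rf /var/www/html
    git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-webserver-repo /var/www/html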
Which option is advised? Why?
I suggest you have a reference machine which is used for creating images. Keep it updated with the latest version of your code, and when you are happy with it, create an image from it, update your launch configuration with the new AMI, and point the ASG configuration at it. You can then stop the reference machine and leave the job to the ASG instances.

CodeDeploy with an AWS ASG

I have configured an AWS ASG using Ansible to provision new instances and then install the CodeDeploy agent via a "user_data" script, in a similar fashion to that suggested in this question:
Can I use AWS code Deploy for pulling application code while autoscaling?
CodeDeploy works fine, and I can install my application onto the ASG once it has been created. When new instances are triggered in the ASG by one of my rules (e.g. high CPU usage), the CodeDeploy agent is installed correctly. The problem is that CodeDeploy does not install the application on these new instances. I suspect it is trying to run before the user_data script has finished. Has anyone else encountered this problem? Or does anyone know how to get CodeDeploy to automatically deploy the application to new instances that are spawned as part of the ASG?
Auto Scaling tells CodeDeploy to start the deployment before the user data is started. To get around this, CodeDeploy gives the instance up to an hour to start polling for commands for the first lifecycle event, instead of 5 minutes.
Since you are having problems with automatic deployments but not manual ones, and assuming that you didn't make any manual changes to your instances that you forgot about, there is most likely a dependency specific to your deployment that is not yet available at the time the instance launches.
Try listing out all the things that your deployment needs to succeed and make sure that each of them is available before you install the host agent. If you can log onto the instance fast enough (before Auto Scaling terminates it), try to grab the host agent logs and your application's logs to find out where the deployment is failing.
If you think the host agent is failing to install entirely, make sure you have Ruby 2.0 installed. It should be there by default on Amazon Linux, but Ubuntu and RHEL need it installed as part of the user data before you can install the host agent. There is an installer log in /tmp that you can check for problems in the initial install (again, you have to be quick to grab the log before the instance terminates).
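For Ubuntu, that user data portion might look roughly like this, with the dependencies installed before the agent (the region in the installer bucket URL is a placeholder):

    #!/bin/bash
    set -e
    apt-get update
    apt-get install -y ruby wget
    cd /home/ubuntu
    # Download and run the CodeDeploy agent installer for your region.
    wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto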