Automating the Git pull process on EC2 Ubuntu instances - amazon-web-services

I am running a couple of Ubuntu EC2 instances, and I want an automation script that pulls the code from GitHub whenever a new instance is booted from the AMI. At present I SSH to the server and run git pull origin master, and it prompts for a password for the key.
How do I automate this process? After booting a new instance from the AMI, the script should:
Run
Pull the code along with its submodules
Create a couple of files and configure them
Please help me achieve this.
Thanks

This will probably take some time and configuration, but it should set you on the right path.
First, set up your SSH keys so that you can pull from the repo automatically, without a password. Outlined here: https://help.github.com/articles/generating-ssh-keys
Next, create a startup script that issues the pull command against GitHub. Here: https://help.ubuntu.com/community/UbuntuBootupHowto
Then save your AMI. When you launch a new instance from that AMI, the script should run and pull in your GitHub changes.
Also note: make sure Git's remote URL uses SSH; if it is HTTPS, it will ALWAYS ask for a password.
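As a rough sketch of those steps (the repo URL and paths below are placeholders, and it assumes a passphrase-less deploy key is acceptable for your setup):

    # one time, on the instance you will bake into the AMI:
    # generate a key with no passphrase and add the .pub as a deploy key on the GitHub repo
    ssh-keygen -t ed25519 -N "" -f ~/.ssh/id_ed25519
    cat ~/.ssh/id_ed25519.pub
    ssh-keyscan github.com >> ~/.ssh/known_hosts   # avoid the host-key prompt on first pull

    # make sure the remote uses SSH, not HTTPS
    cd /var/www/your-app
    git remote set-url origin git@github.com:your-org/your-app.git

    # startup snippet (e.g. rc.local or an init script per the Ubuntu bootup guide):
    cd /var/www/your-app && git pull origin master && git submodule update --init --recursive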

Your best bet would be to take advantage of the fact that Ubuntu's canonical image ships with cloud-init.
Using cloud-init, you can pass scripts (e.g. shell scripts) as EC2 user data to be executed at various startup stages.
It would be very easy to run your Git command-line sequence from such a script. Here is a link to the documentation, which includes examples.
https://help.ubuntu.com/community/CloudInit
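For illustration, a minimal user-data sketch along those lines (the repo URL and target directory are placeholders; it assumes the AMI already has Git and a working deploy key):

    #!/bin/bash
    # passed as EC2 user data; cloud-init runs this once on first boot
    set -e
    REPO=git@github.com:your-org/your-app.git   # placeholder
    DEST=/var/www/your-app                      # placeholder

    if [ -d "$DEST/.git" ]; then
        cd "$DEST" && git pull origin master
    else
        git clone "$REPO" "$DEST" && cd "$DEST"
    fi
    git submodule update --init --recursive
    # create and configure any additional files here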

Create user/password access to your Ubuntu instance, and replicate that instance if you need several; then you are no longer tied to key-based access. If you need to automate a process on that instance, either cron it or send the script to the instance via SSH and let cron find and run it.
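For example (host, path, and schedule are placeholders), pushing a script to the instance and letting cron run it could look like:

    # from your machine: copy the script over with password authentication
    scp deploy.sh ubuntu@your-instance:/home/ubuntu/deploy.sh

    # on the instance: run it every 15 minutes via cron
    ( crontab -l 2>/dev/null; echo "*/15 * * * * /bin/bash /home/ubuntu/deploy.sh >> /home/ubuntu/deploy.log 2>&1" ) | crontab -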

Related

Execute shell script from Git on a specific EC2 instance

I have a shell script. Per my organization's rules, we should keep all the code in GitHub and use it from there. How can I run this script on a specific set of EC2 instances from GitHub using Jenkins?
You may consider using AWS Systems Manager to run shell commands on your EC2 instances.
https://docs.aws.amazon.com/systems-manager/latest/userguide/execute-remote-commands.html
You can find a sample use case here: https://ujjwalbhardwaj.me/post/trigger-actions-inside-ec2-instance-on-cloudwatch-events-using-aws-systems
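As a rough sketch (the instance ID, path, and script name are placeholders), a Jenkins job with the AWS CLI could invoke Run Command like this, assuming the SSM agent and an appropriate instance profile are in place:

    # run a shell script on a specific instance via Systems Manager Run Command
    aws ssm send-command \
        --document-name "AWS-RunShellScript" \
        --targets "Key=instanceids,Values=i-0123456789abcdef0" \
        --parameters 'commands=["cd /opt/app && git pull origin master","bash ./myscript.sh"]' \
        --comment "Run deployment script from Jenkins"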

AWS - Additional SSH User with cloud-init. No Reboot

I am able to successfully provision Linux SSH users via cloud-init upon initial startup using this guide: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
The issue I am facing is that every time I need to add or remove users, I have to shut down the machine to modify the user-data script. How would I go about creating and removing users while reducing downtime?
If I manually add users (https://aws.amazon.com/premiumsupport/knowledge-center/new-user-accounts-linux-instance/), will those users be removed when the instance restarts and runs the startup script?
You could potentially use EC2 launch templates. They are very easy to set up and easy to clone to different instance sizes and classes. It's pretty slick how I use them with user data to control environment variables for dev, stage, and prod use too!
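On the question of adding users by hand: cloud-init's user-data script normally runs only on the instance's first boot, so a plain restart would not be expected to re-run it and remove manually created accounts. A minimal sketch of the manual route (username and key are placeholders):

    # create an extra SSH user on a running instance
    sudo adduser --disabled-password --gecos "" newuser
    sudo mkdir -p /home/newuser/.ssh
    echo "ssh-ed25519 AAAA...publickey... newuser@example" | sudo tee /home/newuser/.ssh/authorized_keys
    sudo chown -R newuser:newuser /home/newuser/.ssh
    sudo chmod 700 /home/newuser/.ssh
    sudo chmod 600 /home/newuser/.ssh/authorized_keys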

Bitbucket and AWS EFS

Hope someone can help me with this.
I have set up an EC2 with EFS mounted to it.
The EFS basically contains the WordPress source code.
What is the best way to set up an automated process to update the source code inside the EFS from a Bitbucket repository?
I'm able to SSH to the server and run git pull (a bit slow), but I want to set up an auto-scaler later, and I want a way to deploy code changes directly into the EFS without SSHing into the EC2 instance.
Thanks
Easiest way: have a cron job run git pull every day, or at whatever frequency you want.
If you want to be fancier, have it run git fetch and then, depending on the changes, run a pull.
If you want to be secure, create a read-only Bitbucket account so that if someone infiltrates your EC2 host they can't mess up your repo.
If you want to be able to monitor it, you can make the script ping AWS CloudWatch and add alarms so that if your cron fails, you can set it back up.
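A minimal sketch of such a cron script (the repo path, region, and metric names are placeholders; it assumes the AWS CLI plus an instance role that allows cloudwatch:PutMetricData):

    #!/bin/bash
    # fetch first, pull only if origin/master moved, then report a heartbeat metric
    set -e
    cd /mnt/efs/wordpress   # placeholder path to the EFS-mounted code

    git fetch origin master
    if [ "$(git rev-parse HEAD)" != "$(git rev-parse origin/master)" ]; then
        git pull origin master
    fi

    # heartbeat for a CloudWatch alarm to watch; missing data means the cron stopped running
    aws cloudwatch put-metric-data \
        --region us-east-1 \
        --namespace "Custom/Deploy" \
        --metric-name "GitPullHeartbeat" \
        --value 1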

Should multiple ec2 instances share an EFS or should code be downloaded to instance on spin-up?

I am playing around with the idea of having an Auto Scaling group for my website, which receives a lot of traffic. I need each server to run an identical web service, so I have come up with several ideas to make this happen.
Idea 1: Use CodeCommit + User Data
I will keep my webserver code in a git repo in CodeCommit. Then, when my EC2 instances spin up, they will install apache2 and then pull from the git repo.
Idea 2: Use Elastic File System
After a server spins up, it will mount one central EFS that has my webserver code on it. The instance will install apache2 and then use EFS to get the proper PHP files, etc.
Idea 3: Use AWS S3
Like above with apache2, but then download the webserver code from S3.
Which option is advised? Why?
I suggest you have a reference machine that is used for creating images. Keep it updated with the latest version of your code, and when you are happy with it, create an image from it, update your launch configuration, and change the ASG configuration so that it uses the new image. You can then stop the reference machine and leave the job to the ASG instances.
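As a rough sketch of that flow using the AWS CLI (all IDs and names below are placeholders):

    # bake a new image from the up-to-date reference machine
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "webserver-$(date +%Y%m%d)"

    # create a launch configuration that points at the new AMI,
    # then switch the Auto Scaling group over to it
    aws autoscaling create-launch-configuration \
        --launch-configuration-name web-lc-v2 \
        --image-id ami-0abc1234example \
        --instance-type t3.small \
        --key-name my-key

    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name web-asg \
        --launch-configuration-name web-lc-v2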

Autoscaling EC2: Launch Webserver on Spun-up Instance

I seem to be missing one central point of AWS Auto Scaling:
I created an AMI of an (Ubuntu) EC2 instance with my web server installed, and I use this AMI in the launch configuration for my Auto Scaling group.
But when Auto Scaling decides to spin up a new instance, how am I supposed to launch my web server on that instance? Am I supposed to write some startup scripts, or what is the best practice for starting a process on a newly spun-up instance from Auto Scaling?
When I deploy an application (PostgreSQL, Elasticsearch, whatever) to an EC2 instance, I usually do it intending on being able to repeat the process. So my first step is to create an initial deployment script which will do as much of the install and setup process as possible, without needing to know the IP address, hostname, amount of memory, number of processors, etc. Basically, as much as I can without needing to know anything that can change from one instance to the next or upon shutdown / restart.
Once that is stable, I create an AMI of it.
I then create an initialization script, which I use in the launch configuration so that it executes on instances launched from the previously created AMI.
That's for highly configured applications. If you're just going with default settings (e.g. IP address = 0.0.0.0), then yes, I would simply set 'sudo update-rc.d <> defaults 95 10' so that it runs on startup.
Then create the AMI. When you create a new instance from that AMI, the webserver should start up by default. If it doesn't, I would look at whether you really set the init.d script to do so.
Launching a new instance from an AMI should be no different than booting up a previously shutdown instance.
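For instance, using apache2 as a stand-in for the service name the answer leaves blank, enabling the web server at boot would look like:

    # SysV-style enablement, as described above
    sudo update-rc.d apache2 defaults 95 10

    # on newer Ubuntu releases that use systemd, the equivalent is
    sudo systemctl enable apache2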
By the way, as a matter of practice when creating these scripts, I also do a few things to make things much cleaner for me:
1) Create modules in separate bash scripts (e.g. creating user accounts, setting environment variables, etc.) for repeatability
2) Each deployment script starts by downloading and installing the AWS CLI
3) Every EC2 instance is launched with an IAM role that has S3 read access, IAM SSH describe rights, EC2 Address allocation/association, etc.
4) Load all of the scripts onto S3 and then have the deployment / initialization scripts download the necessary bash module scripts, chmod +x them and execute them. It's as close to OOP as I can get without overdoing it, but it creates really clean bash scripts. The top-level launch / initialization scripts for the most part just download individual scripts from S3 and execute them.
5) I source all of the modules instead of simply executing them. That way bash shares variables.
6) Make the Linux account creation part of the initialization script (not the AMI). Using the CLI, you can query for users, grep for their public SSH key requested from AWS, create their accounts and have everything ready for them to log in automagically.
This way, when you need to change something (i.e. change the version of an application, change a configuration, etc.) you simply modify the module script, and if it affects the AMI, re-launch and re-create the AMI. Otherwise, if it only changes instance-specific configuration, just launch the AMI with the new initialization script.
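A minimal sketch of that kind of top-level initialization script (the bucket and module names are placeholders), covering points 2, 4 and 5 above:

    #!/bin/bash
    # top-level init: make sure the AWS CLI exists, pull module scripts from S3, source them
    set -e
    BUCKET=s3://my-deploy-scripts          # placeholder bucket
    MODULES="create_users.sh set_env.sh"   # placeholder module scripts
    WORKDIR=/opt/bootstrap

    mkdir -p "$WORKDIR" && cd "$WORKDIR"
    command -v aws >/dev/null || sudo apt-get install -y awscli   # point 2

    for m in $MODULES; do
        aws s3 cp "$BUCKET/$m" "$m"    # point 4: fetch the module from S3
        chmod +x "$m"
        source "./$m"                  # point 5: source so modules share variables
    done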
Hope that helps...