EC2 Grab and save data on auto scaling group - amazon-web-services

How can I make a fresh EC2 instance that has just booted from an auto scaling group grab and save sensitive .env data such as DB credentials, encryption keys, etc.?

You can configure that in the user data settings when you are creating the launch configuration.
You can add a curl command there, something like this:
curl -o <directory to save to>/test.dev https://s3.amazonaws.com/bucket-name/test.dev
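If the bucket isn't public, a user data script can fetch the file with the AWS CLI instead. A minimal sketch, assuming the instance profile grants s3:GetObject on the bucket and the AMI ships the CLI (Amazon Linux does); the bucket name and destination path are placeholders:

    #!/bin/bash
    # Runs once at first boot when placed in the instance's user data.
    aws s3 cp s3://bucket-name/test.dev /var/www/app/.env
    # Sensitive file: restrict it to the application user (user name is an example).
    chown www-data:www-data /var/www/app/.env
    chmod 600 /var/www/app/.env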

One option is the User Data section under Advanced Details (at the bottom of the page) when creating the launch configuration. However, I recommend using git (or similar) to download the config/other sensitive data: injecting an ENV variable at startup that contains the branch name (which can also act as a stage name) makes the whole setup environment/stage independent, and you only need to maintain one AMI (see the sketch after this list).
Another option is a Docker image build; during container startup (or during the image build) you can inject ENV variables (or mount an Elastic File System into the container) that contain the config elements.
A third option is a CloudFormation stack configured so that stack creation/update pulls the config elements onto the instances.
The fourth option is to use OpsWorks (Chef) to apply configuration updates on a regular schedule (cron/service) or via triggered automation (CI/CD tools like Jenkins, Bamboo, or Travis can do this easily).
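A rough sketch of the first option, where a STAGE variable injected through user data doubles as the git branch name (the repo URL and paths are hypothetical):

    #!/bin/bash
    # STAGE is set at startup, e.g. exported earlier in the user data script.
    STAGE="${STAGE:-dev}"
    # One AMI for every environment; only the branch checked out differs.
    git clone --branch "$STAGE" --depth 1 \
        https://example.com/acme/app-config.git /etc/myapp/config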

Related

Is there a service in AWS that is equivalent to docker configs?

I have a WordPress site that is going to be hosted using ECS on AWS.
To make management more flexible, I plan not to store service configurations (i.e. php.ini, nginx.conf) inside the Docker image itself. I found that Docker Swarm offers "docker configs" for this. Are there any equivalent tools doing the same thing? (I know AWS Secrets Manager can handle Docker secrets, though.)
Any advice or alternative approaches? Thank you all.
The most similar thing you could use is probably AWS SSM Parameter Store.
You will need some logic to retrieve the values when you are running the image.
If you don't want the files inside the running containers either, then you pull from Parameter Store and add the values to the environment. You will probably need to do some work in the application to read from the environment (the application stays decoupled from the actual source of the config), or you can read directly from Parameter Store in the application (easier, but your image is coupled to Parameter Store).
If your concern is only about not having the values in the image, but it is fine to have them inside the running container, then you can read from Parameter Store and inject the values into the container in the usual location of the files, so it is transparent to the application.
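A sketch of that entrypoint logic, assuming the task role may call ssm:GetParameter; parameter names and file paths are made up:

    #!/bin/bash
    # Pull a secret into the environment (the decoupled variant)...
    export DB_PASSWORD="$(aws ssm get-parameter \
        --name /wordpress/prod/db_password \
        --with-decryption \
        --query Parameter.Value --output text)"
    # ...or materialize a whole config file in its usual place (the transparent variant).
    aws ssm get-parameter --name /wordpress/prod/php_ini \
        --query Parameter.Value --output text > /usr/local/etc/php/php.ini
    exec "$@"  # hand off to the image's real entrypoint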
As additional approaches:
Especially for php.ini and nginx.conf, I like a simple approach: a separate git repo with different config files per environment.
You have a common Docker image regardless of the environment.
At build time, you pull the proper file for the environment and either save the values as env variables or inject the file into the container.
And last, I need to mention classic tools like Chef, Puppet, and also Ansible. More complex, and maybe overkill here.
The two ways that I store configs and secrets for most services are:
Credstash, which is a combination of KMS and DynamoDB, and
Parameter Store, which has already been mentioned.
The aws command line tool can be used to fetch from Parameter Store and S3 (for configs), while credstash is its own utility (quite useful and easy to use) and needs to be installed separately.
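For reference, day-to-day credstash usage looks roughly like this (the key name and value are examples; a one-time "credstash setup" creates the backing DynamoDB table):

    # Encrypt a value with KMS and store it in DynamoDB:
    credstash put myapp.db_password 'S3cr3t!'
    # Decrypt and print it at deploy/run time:
    credstash get myapp.db_password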

How to apply rolling updates in VM instances instead of using Managed Instance group in GCP?

Problem: I want to apply patch updates to a VM instance which is not part of a Managed Instance Group. The patch update could be:
1. A change in the version of the current OS of a VM instance, that is, a change from Ubuntu-16-v1 to Ubuntu-16-v2.
2. An upgrade of the OS boot, that is, changing from Ubuntu-16 OS to Ubuntu-18 OS.
3. Installation of a new package on the existing machine.
Exploration:
For Problems 1 & 2 stated above:
I have explored and tried the rolling update feature of Managed Instance Groups in the Google Cloud Platform, and it seems to be a good approach for the problem stated. But what is the best approach, with best practices, if someone is not using a Managed Instance Group?
For Problem 3 stated above
I have tried the OS patch management service of GCP, but is there any other method that I could use?
Create an "image" from the boot disks of your existing Compute Engine instances.
For updating with newer configurations and software, group images in "image family" which always points to the latest image.
See https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images#setting_families
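A possible gcloud workflow (the disk, image, family, and project names are examples; to image a disk attached to a running instance, stop the instance first or pass --force):

    # Snapshot the configured boot disk into a family:
    gcloud compute images create ubuntu-app-v2 \
        --source-disk my-vm-boot-disk \
        --source-disk-zone us-central1-a \
        --family ubuntu-app
    # New VMs pick up the newest image in the family automatically:
    gcloud compute instances create my-new-vm \
        --image-family ubuntu-app \
        --image-project my-project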
For your use case, I think you should use an IaC tool like Terraform to recreate similar VMs with the same name, disk, internal address, etc., and call the script from the repo directly on a scheduled date automatically, or provide self-patch instructions.
Here is the likely process:
Send an email notification to all the VM owners that auto-patch is scheduled on XYZ.
The email content should include the list of instances going to be patched/updated, the list of actions, and the patch team's contact details.
The email should also include a link for skipping this auto-update and performing self-patching per the "Self Patching instructions" document.
The self-patching document should have a command to call the auto-patch wrapper script, like:
curl -u "encrypted-auth:x-oauth-basic" -k -H 'Accept: application/vnd.github.VERSION.raw' 'https://github.com/api/v3/repos/xyz/images/contents/gcp/patch_OS_update.sh?ref=master' | bash -s -- -q
The above script can also offer other options, like querying the patch sets available for a particular VM or scanning the VM for pending updates.
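A hypothetical fragment of what such a wrapper script's "-q" (query) mode might look like, assuming Debian/Ubuntu guests:

    #!/bin/bash
    apt-get update -qq
    # Count pending package updates (skip the "Listing..." header line).
    PENDING=$(apt list --upgradable 2>/dev/null | tail -n +2 | wc -l)
    if [ "$1" = "-q" ]; then
        echo "$PENDING package update(s) pending on $(hostname)"
    else
        DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
    fi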

Any way with Jenkins to select Spring Boot profile?

My group is using Jenkins to build and deploy a Spring Boot application to AWS (Elastic Beanstalk).
For development I was selecting the desired profile for the environment I was in (DEV, QA) by setting the spring.profiles.active property in application.properties.
But dev-ops would like to set the property externally. Since the app is being deployed to a SpotInst managed EC2, I have no control over the environment.
I've been doing a lot of searching, but haven't as yet come across anything that fits this situation.
I'm using Gradle for building. The build generates a WAR file and is deployed to Tomcat.
Is there any way to accomplish this with some sort of parameter substitution?
Thanks.
Our devOps person discovered that we can set environment properties through AWS Beanstalk. Go to Configuration on the Beanstalk environment, click the Modify button under Software, and scroll down to the section for environment variables. He put in spring.profiles.active=qa and that took care of it.
Since we have one Beanstalk environment for each separate environment, it only has to be set once for each.
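The same setting can also be scripted with the AWS CLI, which is handy from a Jenkins job (the environment name here is an example):

    aws elasticbeanstalk update-environment \
        --environment-name myapp-qa \
        --option-settings \
        Namespace=aws:elasticbeanstalk:application:environment,OptionName=spring.profiles.active,Value=qa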
I am not sure if this will fit into your devOps process but there is very easy and clean way to achieve this.
Configure your application.yaml file like below:
spring:
  profiles:
    active: ${SPRING_PROFILE:dev}
And while starting the application, pass SPRING_PROFILE as a command line argument or environment variable. Note that dev is the default value, which is used if the program doesn't find a matching command line argument or environment variable.
You can set the default to the value you want devOps to use, while in every other case where you have control, you can pass it as an argument.
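For example, if your Gradle build produces an executable archive, launching it locally looks like this (paths are illustrative; on Beanstalk the variable would come from environment properties instead):

    # via the placeholder's environment variable:
    SPRING_PROFILE=qa java -jar build/libs/myapp.war
    # or by overriding the Spring property directly:
    java -jar build/libs/myapp.war --spring.profiles.active=qa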

Drupal on Elastic Beanstalk Redirects to Install.php on "eb deploy"

I am encountering what I believe to be permission issues when trying to deploy a Drupal application onto Elastic Beanstalk.
I followed this tutorial to get Drupal up and running: http://comm-press.de/en/blog/drupal-climbs-aws-elastic-beanstalk
I am using a Postgres database and I am entering the correct credentials when filling out the forms on install.php, without error.
Any subsequent deploy after the initial one brings me back to install.php. After entering my database information, I get a message telling me Drupal is already installed (which it is):
http://i.imgur.com/N6KDvvo.png
Why does my site get redirected to install.php after 'eb deploy'? What permissions should I set on my Drupal folder so that settings.php and /sites/default/files are generated?
The install state is controlled by the DB-- if Drupal bootstraps with no DB information, you are routed to the code that asks you for it.
I was able to bypass this part by setting up a AWS RDS DB and connecting all instances to it.
But wait, there's more. Having all the instances read from the same DB squashes most of the concurrency problems between instances. Now go ahead, try to add a photo to your admin profile. I will wait. Yep, most of the time you'll hit the wrong instance, and the single image stored on one instance is not shown on the others.
I am working on solving that problem with a start up & cron job script that loads updates to resources from the AWS S3 service.
Step A: load code into S3.
Step B: set an accessible timestamp for $lastModified to now().
Step 1: wget/curl a timestamp of the last remote modification ($lastModified).
Step 2: compare the local last-updated stamp ($lastUpdated) to the remote last-modified timestamp.
Step 3: if ($lastModified == $lastUpdated) {die} else {update incremental changes && set $lastUpdated = $lastModified}
Watch that first incremental update, it's a doozy.
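A rough sketch of that cron job in shell (the bucket name and paths are placeholders):

    #!/bin/bash
    # Step 1: fetch the remote last-modified stamp published by the deploy (Step B).
    LAST_MODIFIED=$(curl -s https://s3.amazonaws.com/my-bucket/lastModified.txt)
    # Step 2: compare with the local stamp.
    LAST_UPDATED=$(cat /var/www/.lastUpdated 2>/dev/null)
    # Step 3: die if nothing changed, otherwise pull incremental changes.
    [ "$LAST_MODIFIED" = "$LAST_UPDATED" ] && exit 0
    aws s3 sync s3://my-bucket/webroot /var/www/html
    echo "$LAST_MODIFIED" > /var/www/.lastUpdated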
So... additionally I should mention that we install completely vanilla Drupal when we instantiate an image. As part of the Dockerfile (based on the official Drupal Apache image), the last thing the Dockerfile runs is a setup script.
Elastic Beanstalk sets Environment variables-- some of those variables are the amazon access key id and access secret key.
I curl an IP that is only reachable from inside EC2: curl -v 169.254.169.254
From that output, the setup script can tell whether it is running locally or in AWS EB. That allows me to conditionally change certain configurations, like connecting Drupal to RDS or to a local MySQL DB.
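A sketch of that check: the metadata endpoint only answers on EC2, so a short timeout distinguishes a local container from a Beanstalk one (variable names are illustrative; RDS_HOSTNAME is set by Beanstalk when an RDS instance is attached to the environment):

    if curl -s --max-time 2 http://169.254.169.254/latest/meta-data/ >/dev/null; then
        DB_HOST="$RDS_HOSTNAME"   # running in AWS EB
    else
        DB_HOST="localhost"       # local development MySQL
    fi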
The setup script uses the aws cli to pull (sync) from S3, adding, replacing, and updating everything in the webroot, turning the instance into a set-up Drupal installation as far as file-level assets go.
A lot of sed and service reloads happen. ElastiCache vs. local Redis...
Last, we start the web server in the foreground and tail -f /dev/null so the container doesn't immediately close.
Drupal just serves static asset pages and a header/menu/footer wrapper for our web app (templates are served; Twig/JS fills the templates in with data). Authentication happens via an API; we're not even using 90% or so of the goodness in Drupal...
Incremental changes are pulled by comparing hash values and acting to run the update process if they are different.

How does ElasticBeanStalk deploy your application version to instances?

I am currently using AWS Elastic Beanstalk and I was curious how, internally, it knows that when an instance fires up (manually or automatically via scaling), it should unpack the zip I uploaded as a version. Is there some environment setting that looks up my zip in my S3 bucket and then unpacks it automatically for every instance running in that environment?
If so, could this be used to automate a task such as running an SQL query on boot-up (instance deployment) too? Are these automated tasks changeable or viewable at all?
Thanks
I don't know how Beanstalk knows which version to download and unpack, but running a task on start-up is trivial. Check out cloud-init, a tool originally written for Ubuntu that's now packaged in Amazon Linux. It allows you to pass arbitrary shell scripts into the UserData section of the instance configuration, and those shell scripts will run on startup.
It's a great way to bootstrap instances on startup, which avoids the soul-sucking misery of managing AMIs.
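For example, a user data script that cloud-init executes on first boot might look like this (the package and log path are just illustrations):

    #!/bin/bash
    # Runs once, as root, when the instance first boots.
    yum install -y htop
    echo "bootstrapped at $(date)" >> /var/log/bootstrap.log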
A quick (possibly non-applicable) warning: If you're running a SQL query on a database that lives on the beanstalk AMI, you're pretty much guaranteed to lose your database at some point. Those machines are designed to be entirely transient. Do not put databases on them. See this answer for more details.
Since your goal seems to be to run custom configuration tasks, the answer is yes, there is a way to do that. You can define custom actions in an .ebextensions file packaged with your app. For example, you can configure a command to run every time a new machine is deployed:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#linux-commands
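For instance, a file like .ebextensions/01-commands.config inside your app bundle can declare a command that runs on each deploy. The script name here is hypothetical; leader_only: true restricts the command to a single instance per deploy, which is what you'd want for an SQL migration:

    container_commands:
      01_run_migration:
        command: "bash scripts/run_migration.sh"
        leader_only: true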