Commit .elasticbeanstalk/config.yml in Elastic Beanstalk - amazon-web-services

Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
We want to deploy using our CI, so we cannot use the interactive eb init.
What we are thinking of now is to define our dev, uat, and prod environments inside that config.yml (if possible) and to point to the right environment using eb deploy.
We saw that we could run eb init with all the necessary parameters in EB CLI version 2, but apparently not in version 3 anymore. Has the approach changed?
Can someone explain how to deploy to EB for multiple environments, without interaction?

We want to deploy using our CI and so we cannot use the interactive eb init
You can suppress the interactive mode as follows:
eb init --platform <platform-name> --region <region-name> <application-name>
Is it a good approach to commit the .elasticbeanstalk/config.yml inside the git repo of a project which uses eb deploy?
Can someone explain how to deploy EB for multiple environments, without interaction?
By design, the EB CLI keeps the .elasticbeanstalk/ directory out of version control (it adds it to .gitignore), since it can contain developer-specific information that can cause confusion when shared, so it is generally best left out of VC. That said, you are free to commit it to version control; just ensure there is no sensitive information in it. Logs and saved configurations are usually stored in .elasticbeanstalk/.
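For reference, a freshly generated .elasticbeanstalk/config.yml typically looks roughly like this (application, environment, branch, and platform names below are placeholders):

branch-defaults:
  main:
    environment: my-app-dev
global:
  application_name: my-app
  default_platform: Python 3.8 running on 64bit Amazon Linux 2
  default_region: eu-west-1
  workspace_type: Application

The branch-defaults section is what eb use updates, so mapping each branch to its own environment there is one way to drive a per-branch eb deploy.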
1. You can copy pertinent portions of the .elasticbeanstalk/config.yml file into a root-level file from which CI could read information such as the environment name to use.
2. Locally, you could create a pre-commit Git hook that reads the default environment name from the .elasticbeanstalk/config.yml file and writes it to that root-level file -- let's call it .environment_config.sh. It could be a statement as simple as export BEANSTALK_ENVIRONMENT_NAME=<environment name from .elasticbeanstalk/config.yml> (see the sketch below).
3. On the CI server:
3.1. Ensure the working directory is a Git checkout. Systems such as Jenkins usually check out the necessary branch already, so CI can simply source .environment_config.sh at this point and load the name of the environment to deploy.
3.2. eb init --platform <platform-name> --region <region-name> <application-name>
3.3. eb use $BEANSTALK_ENVIRONMENT_NAME
3.4. eb deploy
(You could combine 3.3. and 3.4. by performing eb deploy $BEANSTALK_ENVIRONMENT_NAME instead; I just wanted to demonstrate the use of eb use)
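A minimal sketch of steps 2 and 3, assuming a Git pre-commit hook and the hypothetical .environment_config.sh file name from above (the grep/awk extraction is illustrative only):

#!/bin/bash
# .git/hooks/pre-commit (local only, illustration) -- read the default
# environment name from the EB CLI config and write it to a tracked file.
env_name="$(grep -m1 'environment:' .elasticbeanstalk/config.yml | awk '{print $2}')"
echo "export BEANSTALK_ENVIRONMENT_NAME=${env_name}" > .environment_config.sh
git add .environment_config.sh

# On the CI server, after the repository has been checked out:
source .environment_config.sh
eb init --platform <platform-name> --region <region-name> <application-name>
eb use "$BEANSTALK_ENVIRONMENT_NAME"
eb deploy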

The EB CLI is really meant to be used from a workstation. I think you'd be better off scripting your CI with the AWS CLI.
A deployment with eb deploy will archive your code in S3 (or CodeCommit), create a new application version, and then update the environment with the new version label. All of those operations are supported by AWS CLI commands.
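A rough sketch of those steps with the AWS CLI (bucket, application, environment, and version label names are placeholders):

VERSION_LABEL="v0.9.3"   # placeholder version label

# Package the application and upload the bundle to S3
zip -r app.zip . -x '*.git*'
aws s3 cp app.zip s3://my-deploy-bucket/my-app/app-${VERSION_LABEL}.zip

# Register a new application version pointing at the bundle
aws elasticbeanstalk create-application-version \
    --application-name my-app \
    --version-label "${VERSION_LABEL}" \
    --source-bundle S3Bucket=my-deploy-bucket,S3Key=my-app/app-${VERSION_LABEL}.zip

# Point the target environment at the new version
aws elasticbeanstalk update-environment \
    --environment-name my-app-dev \
    --version-label "${VERSION_LABEL}"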
Or, you could write your own deployment script in Python with boto3. That's an easy option too. That's basically what the EB CLI is.

Related

Elastic Beanstalk deleting generated files on config changes

On Elastic Beanstalk, with an Amazon Linux 2 based environment, updating the Environment Properties (i.e. environment variables) of an environment causes all generated files to be deleted. It also doesn't run container_commands as part of this update.
So, for example, I have a Django project with collectstatic in the container commands:
05_collectstatic:
  command: |
    source $PYTHONPATH/activate
    python manage.py collectstatic --noinput --ignore *.scss
This collects static files to a folder called staticfiles as part of deploy. But when I do an environment variable update, staticfiles is deleted. This causes all static files on the application to be broken until I re-deploy, which is extremely undesirable.
This behavior did not occur on Amazon Linux 1 based environments. The difference appears to be that Amazon Linux 2 based environments replace the /var/app/current folder during environment variable changes, whereas Amazon Linux 1 based environments did not.
How do I fix this?
Research
I can verify that the container commands are not being run during an environment variable change by monitoring /var/log/cfn-init.log; no new entries are added to this log.
This happens with both rolling update type "disabled" and "immutable".
This happens even if I convert the environment command to be a platform hook, despite the fact that hooks are listed as running when environment properties are updated.
It seems to me like there are two potential solutions, but I don't know of an Elastic Beanstalk setting for either:
Have environment variable changes leave /var/app/current rather than replacing it.
Have environment variable changes run container commands.
The Elastic Beanstalk docs on container commands say "Leader-only container commands are only executed during environment creation and deployments, while other commands and server customization operations are performed every time an instance is provisioned or updated." Is this a bug in Elastic Beanstalk?
Related question: EB: Trigger container commands / deploy scripts on configuration change
The solution is to use a Configuration deployment platform hook for any commands that change the files in the deployment directory. Note that this is different from an Application deployment platform hook.
Using the example of the collectstatic command, the best thing to do is to move it from a container command to a pair of hooks, one for standard deployments and one for configuration changes.
To do this, remove the collectstatic container command. Then, make two identical files:
.platform/confighooks/predeploy/predeploy.sh
.platform/hooks/predeploy/predeploy.sh
Each file should have the following code:
#!/bin/bash
source $PYTHONPATH/activate
python manage.py collectstatic --noinput --ignore *.scss
You need two seemingly redundant files because different hooks have different trigger conditions. Scripts in hooks run when you deploy the app whereas scripts in confighooks run when you change the configuration of the app.
Make sure both of these files are marked executable in Git, or you will run into a "permission denied" error when you try to deploy. You can check whether they are executable with git ls-files -s .platform; you should see 100755 in front of each shell file in the output. If you see 100644 in front of any of your shell files, run git add --chmod=+x -- .platform/*/*/*.sh to mark them executable.

How to clone an AWS EB environment across platform branches

Background
Our AWS Elastic Beanstalk environment, running the latest version of the pre-configured "Python 3.7 on 64-bit Amazon Linux 2" platform branch, has a lot of custom configuration and environment properties.
Now we would like to switch this environment to the "Python 3.8 on 64-bit Amazon Linux 2" platform branch.
Basically, the goal is to clone the environment, keeping the current configuration (other than platform branch and version) and environment properties.
Problem
Unfortunately, when cloning, it is not possible to switch between different platform branches (we can only switch between platform versions within the same platform branch).
The documentation suggests that a blue/green deployment is required here. However, a blue/green deployment involves creating a new environment from scratch, so we would still need some other way to copy our configuration settings and environment properties.
Question
What would be the recommended way to copy the configuration settings and/or environment properties from the original environment into a newly created environment?
I suppose we could use eb config to download the original configuration, modify the environment name, platform branch and version, and so on, and then use eb config --update on the new environment. However, that feels like a hack.
Summary
save current config: eb config save <env name>
use a text editor to modify the platform branch in the saved config file
create new environment based on modified config file: eb create --cfg <config name> (add --sample to use the sample application)
if necessary, delete local config files
if necessary, use eb printenv and eb setenv to copy environment properties
EDIT: For some reason the saved config does not include all security group settings, so it may be necessary to check those manually, using the EB console (configuration->instances).
Background
AWS support have confirmed that using eb config is the way to go, and they referred to the online documentation for details.
Unfortunately, the documentation for the eb cli does not provide all the answers.
The following is based on my own adventures using the latest version of the eb cli (3.20.2) with botocore 1.21.50, and documentation at the time of writing (Sep 30, 2021). Note there's a documentation repo on github but it was last updated six months ago and does not match the latest online docs...
eb config
Per the eb config docs (screenshot omitted here), environment properties are not included in the displayed configuration settings.
Indeed, if you call eb config my-env or eb config my-env --display, environment properties are not shown.
However, this does not hold for eb config save: YAML files created using eb config save actually do include environment properties*.
*Beware, if your environment properties include secrets (e.g. passwords), these also end up in your saved configs, so make sure you don't commit those to version control.
Moreover, it is currently also possible to set environment properties using eb config --update.
This implies we should be able to "copy" both configuration settings and environment properties in one go.
EDIT: After some testing it turns out eb config save does not always get the complete set of environment properties: some properties may be skipped. Not yet sure why... Step 5 below might help in those cases.
Walk-through
Not sure if this is the best way to do it, but here's what seems to work for me:
Suppose we have an existing EB environment called py37-env with lots of custom configuration and properties, running the Python 3.7 platform branch.
The simplest way to "clone" this would be as follows:
Step 1: download the existing configuration
Download the configuration for the existing environment:
eb config save py37-env
By default, the config file will end up in our project directory as .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml.
The saved config file could look like this (just an example, also see environment manifest):
EnvironmentConfigurationMetadata:
  Description: Configuration created from the EB CLI using "eb config save".
  DateCreated: '1632989892000'
  DateModified: '1632989892000'
Platform:
  PlatformArn: arn:aws:elasticbeanstalk:eu-west-1::platform/Python 3.7 running on 64bit Amazon Linux 2/3.3.5
OptionSettings:
  aws:elasticbeanstalk:application:environment:
    MY_ENVIRONMENT_PROPERTY: myvalue
  aws:elasticbeanstalk:command:
    BatchSize: '30'
    BatchSizeType: Percentage
  aws:elb:policies:
    ConnectionDrainingEnabled: true
  aws:elb:loadbalancer:
    CrossZone: true
  aws:elasticbeanstalk:environment:
    ServiceRole: aws-elasticbeanstalk-service-role
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:autoscaling:launchconfiguration:
    IamInstanceProfile: aws-elasticbeanstalk-ec2-role
    EC2KeyName: my-key
  aws:autoscaling:updatepolicy:rollingupdate:
    RollingUpdateType: Health
    RollingUpdateEnabled: true
EnvironmentTier:
  Type: Standard
  Name: WebServer
AWSConfigurationTemplateVersion: 1.1.0.0
Also see the list of available configuration options in the documentation.
Step 2: modify the saved configuration
We are only interested in the Platform, so it is sufficient here to replace 3.7 by 3.8 in the PlatformArn value.
If necessary, you can use e.g. eb platform list to get an overview of valid platform names.
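If you prefer to make the change from the shell rather than a text editor, something like this should work (the file name matches the default location from step 1; note the -i syntax below is GNU sed, on macOS use sed -i ''):

# Swap the platform branch inside the saved config, in place
sed -i 's/Python 3.7 running on 64bit Amazon Linux 2/Python 3.8 running on 64bit Amazon Linux 2/' \
    .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml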
Step 3: create a new environment based on the modified config file
eb create --cfg py37-env-sc
This will deploy the most recent application version. Use --version <my version> to deploy a specific version, or use --sample to deploy the sample application, as described in the docs.
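For example, sticking with the placeholder names above:

# Deploy a specific application version with the new environment
eb create --cfg py37-env-sc --version <my version>
# Or start from the sample application instead
eb create --cfg py37-env-sc --sample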
This will automatically look for files in the default saved config folder, .elasticbeanstalk/saved_configs/.
If you get a ServiceError or InvalidParameterValueError at this point, make sure only to pass in the name of the file, i.e. without the file extension .cfg.yml and without the folders.
Step 4: clean up local saved configuration file
Just in case you have any secrets stored in the environment properties.
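For example, assuming the default location from step 1:

rm .elasticbeanstalk/saved_configs/py37-env-sc.cfg.yml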
Step 5: alternative method for copying environment properties
If environment properties are not included in the saved config files, or if some of them are missing, here's an alternative way to copy them (using bash).
This might not be the most efficient implementation, but I think it serves to illustrate the approach. Error handling was omitted, for clarity.
source_env="py37-env" # or "$1"
target_env="py38-env" # or "$2"
# get the properties from the source environment
source_env_properties="$(eb printenv "$source_env")"
# format the output so it can be used with `eb setenv`
mapfile -t arg_array < <(echo "$source_env_properties" | grep "=" | sed -e 's/ =/=/g' -e 's/= /=/g' -e 's/^ *//g')
# copy the properties to the target environment
eb setenv -e "$target_env" "${arg_array[@]}"
This has the advantage that it does not store any secrets in local files.

Deploy app from CircleCI with

I'm looking to automatically deploy my app once we release a new version. We use CircleCI, so firing these commands shouldn't be a big deal.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u myuser -p secret
cf push myapp
However, I don't want to expose my personal credentials (Passeport account) in our git repository. Is it possible to generate an API key for that purpose?
How do you handle that? I might also need to ssh into the instance to run some migration scripts after the deployment; the same question applies there.
Currently, Swisscom's Application Cloud does not offer technical accounts, but you can easily create an additional account. Then add it to your org/space as a developer and it should be able to fulfill your needs.
CircleCI documentation has a section about handling secrets: Using CircleCI Environment Variables
Setting environment variables for all commands without adding them to git
Occasionally, you'll need to add an API key or some other secret as an environment variable. You might not want to add the value to your git history. Instead, you can add environment variables using the Project settings > Environment Variables page of your project.
This documentation describes how to store encrypted stuff within your VCS.
If you prefer to keep your sensitive environment variables checked into git, but encrypted, you can follow the process outlined at circleci/encrypted-files.
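Applied to the cf commands above, the CI step could then look roughly like this (CF_USER and CF_PASSWORD are hypothetical variable names you would define under Project settings > Environment Variables):

# Credentials come from CircleCI project environment variables,
# so nothing sensitive is committed to the repository.
cf login -a https://api.lyra-836.appcloud.swisscom.com -u "$CF_USER" -p "$CF_PASSWORD"
cf push myapp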

Get elastic beanstalk environment variables in docker container

So, I'm trying not to put sensitive information in the Dockerfile. A logical approach is to put the creds in the Elastic Beanstalk configuration (the GUI) as an environment variable. However, docker build doesn't seem to be able to access that environment variable. Any thoughts?
FROM jupyter/scipy-notebook
USER root
ARG AWS_ACCESS_KEY_ID
RUN echo ${AWS_ACCESS_KEY_ID}
I assume that for every deployment you create a new Dockerrun.aws.json file with the correct Docker image tag for that deployment. At deployment time, you can inject environment values, which will then be used in the docker run command by the EB agent. So your Docker containers can then access these environment variables.
Putting sensitive information in place for a Dockerfile to use can serve either to allow a specific step of the image build to run (build time), or to make the secret available to the resulting image at runtime.
For runtime, if you can use the latest Docker 1.13 in a swarm mode configuration, you can manage secrets that way.
But the first case (build time) is typically for passing credentials to an http_proxy, and that can be done with --build-arg:
docker build --build-arg HTTP_PROXY=http://...
This flag allows you to pass the build-time variables that are accessed like regular environment variables in the RUN instruction of the Dockerfile.
Also, these values don’t persist in the intermediate or final images like ENV values do.
In that case, you would not use ENV, but ARG:
ARG <name>[=<default value>]
The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag.
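Applied to the Dockerfile in the question, the build could then look roughly like this (the image tag is a placeholder; the value never persists as an ENV in the final image):

# Pass the key as a build-time variable; it is consumed by the
# ARG AWS_ACCESS_KEY_ID instruction in the Dockerfile above.
docker build --build-arg AWS_ACCESS_KEY_ID="$AWS_ACCESS_KEY_ID" -t my-notebook .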

AWS Elastic Beanstalk - ERROR: No Application Version named 'v0_9_2-76-gf5a4' found

I'm trying to deploy my code to AWS Beanstalk and get this error. I read that it could happen when the number of versions is more than 500, so I deleted a lot of versions, but I still get this error.
eb deploy
ERROR: No Application Version named 'v0_9_2-76-gf5a4' found.
I also tried
git aws.push
Error: Failed to create the AWS Elastic Beanstalk application version
Edit:
Trying with eb deploy --debug I now get:
Instance: i-2ad238d5 Module: AWSEBAutoScalingGroup ConfigSet: null Command failed on instance. Return code: 1 Output: Error occurred during build: Command hooks failed . Script /opt/elasticbeanstalk/hooks/appdeploy/pre/10_bundle_install.sh failed with returncode 18
ebcli.objects.exceptions.ServiceError: Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
Did you update the file .elasticbeanstalk/config.yml? It may have a wrong setup.
1. Make a backup of the .elasticbeanstalk/ folder and remove it
2. Execute eb create
3. Select the same region you deployed to before. You can check the region in the .elasticbeanstalk/config.yml backup
4. A list with the environments will appear; select the right one
5. Deploy now
6. Remove the .elasticbeanstalk/config.yml backup
Check for the .elasticbeanstalk/config.yml file:
environment: CORRECT_ENV_NAME
global:
  application_name: CORRECT_APP_NAME
In my case, I was doing eb deploy X where X was an environment for a different project.
When I had the error
InvalidParameterValueError: No Application Version named 'app-9f5c-180927_071528' found.
I fixed this by specifying the label I wanted to push up.
eb deploy XXX-env -l XXX.0.0.1
The -l flag is documented in the AWS EB Deploy docs.
Most likely, the deploy is targeting an incorrect Elastic Beanstalk application. It could be because you renamed the application in the AWS console.
So double-check that you're pointing to the correct Elastic Beanstalk environment and application; the CLI could be picking up default values from your .elasticbeanstalk/config.yml file.
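A quick way to check where a deploy would go (standard EB CLI commands; the path is the default one):

# Show the application and default environment the CLI is configured for
cat .elasticbeanstalk/config.yml
# List the environments of the configured application (the default is marked with *)
eb list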