How can I launch an AWS Elastic Beanstalk application as a "single-instance environment" using a config file? I'm guessing there's an option I can set in either my config.yml file or in my .config file in .ebextensions, but I have been googling myself in circles trying to find what the option is called.
Found the documentation on all the options available for .ebextensions files here.
Looks like adding the following to a config file in .ebextensions will do the trick:
option_settings:
  - namespace: aws:elasticbeanstalk:environment
    option_name: EnvironmentType
    value: SingleInstance
In the documentation you can see that the default for this option is LoadBalanced.
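Equivalently, .ebextensions also accepts the shorthand map form of option_settings (the same form used in the VPC config further down), so the following should do the same thing:

option_settings:
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance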
Currently I have an AWS CodeCommit repository and an AWS Elastic Beanstalk environment to which I upload updates with the EB CLI, using eb deploy.
I have some config files that are ignored via .gitignore. I want to set up an AWS CodePipeline so that when I push changes to the repository, it automatically runs the test functions and uploads the changes directly to Elastic Beanstalk.
I tried implementing a simple pipeline where I push code to CodeCommit and it deploys to Elastic Beanstalk, but I get the following error:
2019-09-09 11:51:45 UTC-0500 ERROR "option_settings" in one of the configuration files failed validation. More details to follow.
2019-09-09 11:51:45 UTC-0500 ERROR You cannot remove an environment from a VPC. Launch a new environment outside the VPC.
2019-09-09 11:51:45 UTC-0500 ERROR Failed to deploy application.
This is the *.config file that isn't in CodeCommit:
option_settings:
  aws:ec2:vpc:
    VPCId: vpc-xxx
    Subnets: 'subnet-xxx'
  aws:elasticbeanstalk:environment:
    EnvironmentType: SingleInstance
    ServiceRole: aws-xxxx
  aws:elasticbeanstalk:container:python:
    WSGIPath: xxx/wsgi.py
  aws:elasticbeanstalk:healthreporting:system:
    SystemType: enhanced
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: xxxxsettings
    SECRET_KEY: xxxx
    DB_NAME: xxxx
    DB_USER: xxxx
    DB_PASSWORD: xxxx
    DB_HOST: xxx
    DB_PORT: xxxx
  aws:autoscaling:launchconfiguration:
    SecurityGroups: sg-xxx
I noticed some syntax that is a little different from the above: the Subnets value has quotes ('') around it. Could this be causing the issue, and if quotes belong here, are they supposed to be around the other values too?
From the config file it looks like you are using a single instance. For a single-instance environment you don't need to specify an Auto Scaling launch configuration. Just remove the last two lines and it will work fine.
From what I have been reading, I think I should not commit my config files, but instead add them during the CodeBuild stage so it generates the .zip file that gets deployed to Elastic Beanstalk.
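A minimal buildspec.yml sketch of that approach, assuming the untracked config lives in an S3 bucket (the bucket, key, and test command below are placeholders, not part of the original setup):

version: 0.2

phases:
  pre_build:
    commands:
      # pull the untracked config into the source tree so it lands in the deployment zip
      - aws s3 cp s3://my-config-bucket/eb/01-options.config .ebextensions/01-options.config
  build:
    commands:
      # run the test functions before packaging
      - python -m pytest
artifacts:
  files:
    - '**/*'

The CodeBuild service role would also need read access to that bucket.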
I just followed this tutorial to learn how to use the eb command.
One thing I want to do is to modify the Health Check Type of the Auto Scaling group created by Elastic Beanstalk to ELB, but I just can't find how to do it.
Here's what I have done:
Change the Health Check Type of the environment dev-env to ELB through the AWS console.
Use eb config save dev-env --cfg my-configuration to save the configuration file locally.
The ELB health check type doesn't appear inside .elasticbeanstalk/saved_configs/my-configuration.cfg.yml file. This means that I must specify the health check type somewhere else.
Then I found another article saying that you can put the health check type inside the .ebextensions folder.
So I made a modification to eb-python-flask, the example app from the tutorial.
Here's my modification of eb-python-flask.
I thought that running eb config put prod and eb create prod2-env --cfg prod with my eb-python-flask would create an environment whose Auto Scaling group health check type is ELB. But I was wrong: the health check type created by the eb commands is still EC2.
Does anyone know how to set the health check type programmatically?
I don't want to set it through the AWS console; it's inconvenient.
An ebextension like the below will do it:
Resources:
  AWSEBAutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    Properties:
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
I use the path .ebextensions/autoscaling.config
The eb create prod3-env --cfg prod command uses the git HEAD version to create the zip file it uploads to Elastic Beanstalk.
This can be discovered by running eb create --verbose prod3-env --cfg prod, which shows verbose output.
The reason I failed to apply my own configuration is that I didn't commit the config file to git before running eb create prod3-env --cfg prod.
After committing the change, I successfully deployed an Auto Scaling group whose health check type is ELB.
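In other words, the missing step was committing the ebextension before creating the environment; a quick sketch of the working sequence (environment and config names from above):

# commit the ebextension so it is included in the zip that eb builds from git HEAD
git add .ebextensions/autoscaling.config
git commit -m "Set ASG health check type to ELB"

# now the saved configuration plus the committed ebextension both apply
eb create prod3-env --cfg prod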
I'm running a single Docker container on Elastic Beanstalk using its Single Container Docker Configuration, and trying to send the application stdout to CloudWatch using the awslogs logging driver.
EB looks for a Dockerrun.aws.json file for the configuration of the container, but as far as I can see doesn't have an option to use awslogs as the container's logging driver (or add any other flags to the docker run command for that matter).
I've tried hacking into the docker run command using the answer provided here, by adding a file .ebextensions/01-commands.config with content:
commands:
  add_awslogs:
    command: 'sudo sed -i "s/docker run -d/docker run --log-driver=awslogs --log-opt awslogs-region=eu-west-2 --log-opt awslogs-group=dockerContainerLogs -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
This works, in the sense that it modifies the run script, and logs show up in CloudWatch.
But the EB application dies. The container is up, but does not respond to requests.
I find the following error in the container logs:
"logs" command is supported only for "json-file" and "journald" logging
drivers (got: awslogs)
I found answers to similar questions relating to ECS (not EB) suggesting appending awslogs to ECS_AVAILABLE_LOGGING_DRIVERS, but I can't find this configuration setting in EB.
Any thoughts?
I'm posting here the answer I received from AWS support:
As an Elastic Beanstalk single-container environment saves stdout and stderr to /var/log/eb-docker/containers/eb-current-app/ by default, and as the new solution stack gives you the option to stream logs to CloudWatch by automating the configuration of the awslogs agent on the instances, what I recommend is to add an ebextension that adds the stdout and stderr log files to the CloudWatch configuration and uses the already configured agent to stream those files to CloudWatch Logs, instead of touching the pre-hooks, which is not supported by AWS, as hooks may change from one solution stack version to another.
Regarding the error you are seeing ("logs" command is supported only for "json-file" and "journald" logging drivers (got: awslogs)): this comes from how Docker works. When it is configured to send logs to a driver other than json-file or journald, it cannot display logs locally because it does not keep a local copy of them.
### BEGIN .ebextensions/logs.config
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 7

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [docker-stdout]
      log_group_name=/aws/elasticbeanstalk/environment_name/docker-stdout
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
### END .ebextensions/logs.config
I was able to expand on the previous answer for a multi-container Elastic Beanstalk environment, as well as inject the environment name. I did have to grant the correct permissions in the EC2 instance role to be able to create the log group (a policy sketch follows the config below). You can see whether it is working by looking in:
/var/log/awslogs.log
This goes in .ebextensions/logs.config:
option_settings:
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: StreamLogs
    value: true
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: DeleteOnTerminate
    value: false
  - namespace: aws:elasticbeanstalk:cloudwatch:logs
    option_name: RetentionInDays
    value: 14

files:
  "/etc/awslogs/config/stdout.conf":
    mode: "000755"
    owner: root
    group: root
    content: |
      [/var/log/containers/docker-stdout]
      log_group_name=/aws/elasticbeanstalk/`{ "Ref" : "AWSEBEnvironmentName" }`/docker-stdout.log
      log_stream_name={instance_id}
      file=/var/log/containers/*-stdouterr.log

commands:
  "00_restart_awslogs":
    command: service awslogs restart
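For the log-group permissions mentioned above, here is a minimal sketch of an ebextension that attaches an inline policy to the instance role. It assumes the default instance role name aws-elasticbeanstalk-ec2-role, and the policy name is illustrative:

Resources:
  DockerStdoutLogsPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: eb-docker-stdout-logs        # illustrative name
      Roles:
        - aws-elasticbeanstalk-ec2-role        # assumes the default EB instance role
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action:
              # what the awslogs agent needs to create and write the group/streams
              - logs:CreateLogGroup
              - logs:CreateLogStream
              - logs:PutLogEvents
              - logs:DescribeLogStreams
            Resource: arn:aws:logs:*:*:log-group:/aws/elasticbeanstalk/*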
If I apply a setting in two config files in the .ebextensions folder, does the last file override the setting in the first file?
For example, take two files with the instance role setting defined:
.ebextensions/0001-base.config

option_settings:
  IamInstanceProfile: aws-ec2-role
.ebextensions/0010-app.config

option_settings:
  IamInstanceProfile: aws-app-role
Which role will the Beanstalk EC2 instance be given? aws-ec2-role or aws-app-role?
.ebextensions files are executed in alphabetical order, so aws-app-role would be the final value for your IamInstanceProfile option setting.
Note that the syntax in your .ebextensions would cause a validation error if you tried to deploy them; here is the correct way to do what you want:
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-app-role
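With the namespace added to both of your files, the ordering rule above still applies and the alphabetically later file wins:

# .ebextensions/0001-base.config
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-ec2-role

# .ebextensions/0010-app.config -- processed later, so this value applies
option_settings:
  "aws:autoscaling:launchconfiguration":
    IamInstanceProfile: aws-app-role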
I'm trying to migrate an existing cluster of processing workers on the back of an SQS queue to be deployed using Elastic Beanstalk. Is there a way, using the eb CLI, to specify the queue either by name or by ID?
My current command looks like this:
eb create -t worker -k my-key
I know it is possible in the UI, but that's not going to work with our CI pipeline.
You can't do that with an eb CLI option, but you can use a config file inside the .ebextensions directory. For example:
option_settings:
  - namespace: aws:elasticbeanstalk:sqsd
    option_name: WorkerQueueURL
    value: YOUR-QUEUE-URL
Here you can see related discussion: https://forums.aws.amazon.com/thread.jspa?messageID=706191
Here is documentation for the option: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html#command-options-general-elasticbeanstalksqsd
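If you only know the queue by name, you can look up the URL to put in the config with the AWS CLI (the queue name below is a placeholder):

# prints the URL of the named queue, for use as WorkerQueueURL
aws sqs get-queue-url --queue-name my-worker-queue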