How to install and enable a service in Amazon Elastic Beanstalk?

I'm banging my head against a wall trying to both install and then enable a service in Elastic Beanstalk. What I want to do is:
Install a service in /etc/init.d that points to my Python app in /opt/python/current/app/
Have Elastic Beanstalk start and keep-alive the service, as specified in an .ebextensions/myapp.config file.
(Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services)
Here's my .ebextensions/myapp.config file:
container_commands:
  01_copy_service:
    command: "cp /opt/python/ondeck/app/my_service /etc/init.d/"
  02_chmod_service:
    command: "chmod +x /etc/init.d/my_service"
services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      files: [/etc/init.d/my_service]
This fails because services are run before container_commands. If I comment out services, deploy, then uncomment services, then deploy again, it will work. But I want to have a single-step deploy, because this will be an auto-scaling node.
Is there a solution? Thanks!

Nate, I have the exact same scenario as you and I solved it this way:
Drop the "services" section and add a "restart" command.
container_commands:
  ...
  03_restart_service:
    command: /sbin/service my_service restart
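Putting that together with the commands from the question, a single-step config might look like this (a sketch; the paths are taken from the question above):

container_commands:
  01_copy_service:
    command: "cp /opt/python/ondeck/app/my_service /etc/init.d/"
  02_chmod_service:
    command: "chmod +x /etc/init.d/my_service"
  03_restart_service:
    command: "/sbin/service my_service restart"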

You can cause the service to restart after a command is run by using a commands: key under the services: key. The documentation for the services: key is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
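For illustration, the shape is roughly this (a sketch based on the documented syntax; the names listed under commands refer to entries in the commands: section, so verify how this interacts with container_commands, which run later in the deployment):

services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      commands:
        - 01_copy_service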

I haven't done it myself, but I want to give you some ideas that should work; it's just a matter of convenience and workflow.
Since it is not really an application file, but rather an EC2 file, and unlikely to change often, you can do one of the following:
Use files content to create the service init script (see the sketch after this list). You can even have a specific config file just for that script.
Store the service init script on S3 and copy its contents with a command.
Create a dummy service script, replace its contents from the deployment with a container command, and add a dependency on that command to the service.
(this one is heavyweight) Create a custom AMI and specify it in the Auto Scaling configuration.
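For the first idea, a minimal sketch of creating the init script via files (the chkconfig header and the script body are placeholders, not from the original post):

files:
  "/etc/init.d/my_service":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # chkconfig: 2345 70 60
      # description: Placeholder init script for my_service.
      case "$1" in
        start) nohup /opt/python/current/app/my_service >/dev/null 2>&1 & ;;
        stop)  pkill -f /opt/python/current/app/my_service ;;
        *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
      esac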
Hope it helps.

Related

Unable to deploy custom docker image to AWS ECS using Terraform

I am using Terraform to build infrastructure on the AWS provider. I am using ECR to push my local Docker images using the AWS CLI.
Now, I have an Application Load Balancer which routes traffic to an ECS service, and I want ECS to manage my Docker containers using Fargate. But the containers exit with "Essential Docker container exited".
That's the only log printed out.
If I change the Docker image to nginx:latest (which is fetched from Docker Hub), it works.
PS: My Docker container is a simple Node application with node:alpine as the base image. Is it something related to this, or am I wrong?
Can anyone provide me with some insight into what is wrong with my approach?
I get the following error in AWS Logs:
standard_init_linux.go:211: exec user process caused "exec format error"
My Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
RUN npm install
# Expose a port.
EXPOSE 8080
# Run the node server.
ENTRYPOINT ["npm", "start"]
They say it's an issue with the start script. I am just running npm start to start the server.
It's not your approach; your image is just not working.
Try running it locally and see the output; otherwise you will need to ship the logs to CloudWatch and see what they say.
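For example (the image tag is arbitrary):

docker build -t my-node-app .
docker run --rm -p 8080:8080 my-node-app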

Create systemd service in AWS Elastic Beanstalk on new Amazon Linux 2

I'm currently trying to create a worker on AWS Elastic Beanstalk which pulls messages from a specific SQS queue (with the help of the Symfony Messenger). I don't want to use dedicated worker instances for this task. After some research, I found out that systemd, which is enabled by default on the new Amazon Linux 2 instances, can help here.
However, I'm not able to create a running systemd service. Here is my .ebextensions/03_workers.config file:
files:
  /etc/systemd/system/my_worker.service:
    mode: "000755"
    owner: root
    group: root
    content: |
      [Unit]
      Description=My worker
      [Service]
      User=nginx
      Group=nginx
      Restart=always
      ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60
      [Install]
      WantedBy=multi-user.target

services:
  systemd:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
I can't see my service running when I run this command:
systemctl | grep my_worker
What am I doing wrong? :)
systemd is not supported in the services key. The only valid value is sysvinit:
services:
  sysvinit:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
But I don't think it will even work, as this is for Amazon Linux 1, not for Amazon Linux 2.
On Amazon Linux 2 you shouldn't even be using .ebextensions much. The AWS docs specifically write:
On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we highly recommend that you use Buildfile, Procfile, and platform hooks whenever possible to configure and run custom code on your environment instances during instance provisioning.
Thus, you should consider using a Procfile, which does basically what you want to achieve:
Use a Procfile for long-running application processes that shouldn't exit. Elastic Beanstalk expects processes run from the Procfile to run continuously. Elastic Beanstalk monitors these processes and restarts any process that terminates. For short-running processes, use a Buildfile.
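For illustration, a Procfile entry for the worker command from the unit file above might look like this (a sketch; the process name worker is an assumption, and not every platform branch supports a Procfile, so check your platform's docs):

worker: /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60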
Alternative
Since you have already created a unit file /etc/systemd/system/my_worker.service for systemd, you can enable and start it yourself.
For this, container_commands in .ebextensions can be used. For example:
container_commands:
  10_enable_worker:
    command: systemctl enable my_worker.service
  20_start_worker:
    command: systemctl start my_worker.service
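One hedged addition: systemd caches unit definitions, so if the unit file changes between deployments you may also need a reload step before starting, e.g.:

container_commands:
  05_reload_systemd:
    command: systemctl daemon-reload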
It's not officially documented, but you can use a systemd service in Amazon Linux 2.
A block like the following should work:
services:
  systemd:
    __SERVICE_NAME__:
      enabled: true
      ensureRunning: true
Support for a "systemd" service is provided by the internal package /usr/lib/python3.7/site-packages/cfnbootstrap/construction.py, which lists the recognized service types: sysvinit, windows, and systemd:
class CloudFormationCarpenter(object):
    _serviceTools = {"sysvinit": SysVInitTool, "windows": WindowsServiceTool, "systemd": SystemDTool}
Note that a systemd service must support chkconfig and in particular your launch script at /etc/init.d/__SERVICE_NAME__ must include a "chkconfig" and "description" line similar to:
# chkconfig: 2345 70 60
# description: Continuously logs Nginx status.
If you don't support chkconfig correctly then chkconfig --list __SERVICE_NAME__ will print an error, and attempting to deploy to Elastic Beanstalk will log a more detailed error in /var/log/cfn-init.log when it tries to start the service.
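For context, a minimal /etc/init.d/__SERVICE_NAME__ skeleton with the required header might look like this (the start/stop bodies are placeholders):

#!/bin/sh
# chkconfig: 2345 70 60
# description: Continuously logs Nginx status.
case "$1" in
  start) nohup /usr/local/bin/__SERVICE_NAME__ >/dev/null 2>&1 & ;;
  stop)  pkill -f __SERVICE_NAME__ ;;
  *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
esac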

Elastic Beanstalk has stopped logging to CloudWatch

I'm posting this as a question and answer, as I think many people may experience the same issue.
I've had an Elastic Beanstalk instance streaming logs to CloudWatch for over a year. Recently, my logs stopped appearing in CloudWatch. The Elastic Beanstalk instance is logging to its container correctly, and I have made no changes to my logging configuration.
TL;DR
The most recent Java/Tomcat 8 Elastic Beanstalk environment includes boto3. My .ebextensions/sshd.config file installs boto3, and this was causing a conflict. Removing the boto3 line from my sshd.config solved the issue.
Full Answer
I started my project as a Java/Tomcat Codestar template. This project includes the file .ebextensions/sshd.config which looks like this:
packages:
  yum:
    python27-devel: []
    python27-pip: []
    gcc: []
  python:
    pycrypto: []
    boto3: []
files:
  "/usr/local/bin/get_authorized_keys":
    mode: "000755"
    owner: root
    group: root
    source: https://s3.amazonaws.com/awscodestar-remote-access-us-east-1/get_authorized_keys
commands:
  01_update_ssh_access:
    command: >
      sed -i '/AuthorizedKeysCommand /s/.*/AuthorizedKeysCommand \/usr\/local\/bin\/get_authorized_keys/g' /etc/ssh/sshd_config &&
      sed -i '/AuthorizedKeysCommandUser /s/.*/AuthorizedKeysCommandUser root/g' /etc/ssh/sshd_config &&
      /etc/init.d/sshd restart
The script installs Python and boto3 into the Elastic Beanstalk instance.
Through AWS support I found out that the latest Java/Tomcat environment already includes boto3, and trying to install it on top of the existing environment was causing a problem. This ultimately resulted in the instance logs failing to stream to CloudWatch.
The solution was simply to remove the explicit install of boto3 in the sshd.config, then rebuild the Elastic Beanstalk instance and redeploy my app.
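For reference, the packages section with the boto3 line removed:

packages:
  yum:
    python27-devel: []
    python27-pip: []
    gcc: []
  python:
    pycrypto: []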
I just tried creating a new Codestar Java/Tomcat template project, and the boto3 install line is still in the sshd.config.
I have not checked other project types to see if the same sshd.config is included in the Codestar template. I assume this is probably not specific to the Java/Tomcat environment.

Elastic Beanstalk: How would I run an ebextension command on the worker tier only?

I have an Elastic Beanstalk application that utilises both the web tier and the worker tier. Jobs are offloaded onto the worker tier from the web tier via SQS to keep the web-facing servers speedy. Both environments use the exact same codebase, and use an RDS instance under them.
I need to run a cron job on the leader server of the worker tier. I've created a .ebextensions folder with a file called crontab in it as follows (it's a Laravel web app):
* * * * * root php /var/www/html/artisan do:something:with:database
Then, I've created a file called 01cronjobs.config, which updates the environment's crontab under root as follows:
container_commands:
  01_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
... all good. Now, I want to deploy this to EB using the eb deploy command. However, I only want the worker tier to take on the cron job, as we can only have one server running the crons throughout the group.
Is there a way to tell the ebextensions config file to only run the config command on the worker tier? Something like worker_only: true would be great here, but it doesn't seem to exist.
Can anybody provide some insight on how I might achieve this? Thanks.
Set an environment property like "tier=worker" under Elastic Beanstalk --> Application --> Environment --> Configuration --> Software Configuration.
Then use the "test" attribute of the "command" key to check for this property, so the command only gets executed when the environment property is set.
Sample from the AWS doc:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
commands:
  python_install:
    command: myscript.py
    cwd: /home/ec2-user
    env:
      myvarname: myvarvalue
    test: '[ ! /usr/bin/python ] && echo "python not installed"'
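Adapted to this question, a hedged sketch (TIER is a hypothetical property name; on some platforms environment properties are not automatically exported to the shell that runs container commands, so verify this or source /opt/elasticbeanstalk/support/envvars first):

container_commands:
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
    test: '[ "$TIER" = "worker" ] && echo "worker tier detected"'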

How to integrate Atlassian Bamboo with AWS Elastic Beanstalk

I want to integrate Atlassian Bamboo with AWS Elastic Beanstalk. Is there any way to do this?
It depends a bit on your Bamboo and Beanstalk config, as well as the type of application you are planning to deploy on AWS Beanstalk.
We did some things for Java Web Apps:
Since Bamboo understands Maven, you can have a look at the following Maven plugin:
http://beanstalker.ingenieux.com.br/beanstalk-maven-plugin/configurations-and-templates.html
We are using it for some environments to create WARs and upload them to Elastic Beanstalk. You can then create a Maven task in Bamboo to call the plugin.
If you downloaded and installed Bamboo on a machine you own, you could use the Elastic Beanstalk command line interface (CLI).
This is probably the most powerful approach, but you need to install the CLI on the Bamboo instance. Then you can do almost anything. This approach should also work for environments other than Java/Tomcat.
Another idea:
If you deploy to Beanstalk using git (i.e. you deploy by making a code change and pushing to Beanstalk), then you can also use the new "Deployment Project" feature in Bamboo to push the code once it passes all tests.
David's answer provides good options for cross product usage of AWS Elastic Beanstalk (+1). Nowadays I'd recommend the excellent unified AWS Command Line Interface over the now legacy AWS Elastic Beanstalk API Command Line Interface, see the resp. AWS CLI commands for elasticbeanstalk.
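For example, publishing a new version with the unified CLI looks roughly like this (the application, environment, bucket, and key names are placeholders):

aws elasticbeanstalk create-application-version \
    --application-name my-app --version-label v42 \
    --source-bundle S3Bucket=my-bucket,S3Key=my-app-v42.zip
aws elasticbeanstalk update-environment \
    --environment-name my-app-env --version-label v42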
If you are looking for a Bamboo specific solution, you might be interested in Utoolity's Tasks for AWS (Bamboo) add-on (commercial, see disclaimer), which provides three dedicated tasks, specifically:
AWS Elastic Beanstalk Application - create, update or delete AWS Elastic Beanstalk applications.
AWS Elastic Beanstalk Application Version - create, update or delete AWS Elastic Beanstalk application versions.
AWS Elastic Beanstalk Environment - create, update, rebuild, restart, swap or terminate AWS Elastic Beanstalk environments and specify configuration settings and advanced options.
Disclaimer: I'm the co-founder of this add-on's vendor, Utoolity.
In case you're interested in C# deployments:
What we do is simply start the awsdeploy tool (it should already be installed on the build server) with a link to the configuration script. I create the environment in Visual Studio, and when I redeploy the application once, I save the script. Once the script is on the build server, I reference it in the deployment configuration with awsdeploy /r c:\location\of\myscript.txt.
The package itself that is referenced in the AWS deployment configuration script is created at build time with the MSBuild /target:package command and defined as an artifact (the default location of the ZIP package is c:\build-dir\...\project\obj\debug\package, but it can be overridden).
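The packaging step itself can be scripted in the build plan roughly like this (the project file name is a placeholder):

msbuild MyProject.csproj /target:package /p:Configuration=Release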
Everything works pretty well so far, although I am having problems starting an Elastic Beanstalk instance when none is available (e.g. for nightly builds).
Take a look at our repo: https://github.com/matzegebbe/docker-aws-login
With that snippet you are able to log in to AWS ECR and push images.
A simple Bamboo task script (of course you need Docker installed on the agents):
#!/bin/bash
# Skip the build if the helper image already exists on this agent.
docker images hellmann/awscli | grep -q awscli
[ "$?" -eq "0" ] && exit 0

# Write a Dockerfile for a small helper image that wraps the AWS CLI.
cat <<'EOF' > Dockerfile
FROM python
MAINTAINER Mathias Gebbe <mathias.gebbe@hellmann.net>
RUN pip install awscli --ignore-installed six
ENV aws_access_key_id AWS_ACCESS_KEY
ENV aws_secret_access_key AWS_SECRET_ACCESS_KEY
RUN mkdir /root/.aws/
RUN printf "[default]\nregion = eu-west-1\n" > /root/.aws/config
RUN printf "[default]\naws_access_key_id = ${aws_access_key_id}\naws_secret_access_key = ${aws_secret_access_key}\n" > /root/.aws/credentials
ENTRYPOINT ["/bin/bash","-c"]
CMD ["aws ecr get-login"]
EOF
docker build -t hellmann/awscli .

# The container prints a "docker login ..." command; $( ) executes it on the agent.
$(docker run --rm hellmann/awscli)