I'm currently trying to create a worker on AWS Elastic Beanstalk which is pulling messages from a specific SQS queue (with the help of the Symfony messenger). I don't want to use dedicated worker instances for this task. After some research, I found out that systemd can help here which is enabled by default on the new Amazon Linux 2 instances.
However, I'm not able to create a running systemd service. Here is my .ebextensions/03_workers.config file:
files:
  /etc/systemd/system/my_worker.service:
    mode: "000755"
    owner: root
    group: root
    content: |
      [Unit]
      Description=My worker
      [Service]
      User=nginx
      Group=nginx
      Restart=always
      ExecStart=/usr/bin/nohup /usr/bin/php /var/app/current/bin/console messenger:consume integration_incoming --time-limit=60
      [Install]
      WantedBy=multi-user.target

services:
  systemd:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
I can't see my service running when I run this command:
systemctl | grep my_worker
What am I doing wrong? :)
systemd is not supported in the services key. The only supported option is sysvinit:
services:
  sysvinit:
    my_worker:
      enabled: "true"
      ensureRunning: "true"
But I don't think it will even work, as sysvinit is for Amazon Linux 1, not Amazon Linux 2.
On Amazon Linux 2 you shouldn't even be using much of .ebextensions. The AWS docs specifically write:
On Amazon Linux 2 platforms, instead of providing files and commands in .ebextensions configuration files, we highly recommend that you use Buildfile, Procfile, and platform hooks whenever possible to configure and run custom code on your environment instances during instance provisioning.
Thus, you should consider using a Procfile, which does basically what you want to achieve:
Use a Procfile for long-running application processes that shouldn't exit. Elastic Beanstalk expects processes run from the Procfile to run continuously. Elastic Beanstalk monitors these processes and restarts any process that terminates. For short-running processes, use a Buildfile.
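For the Symfony messenger worker from the question, a Procfile in the root of the application source could look roughly like this; this is only a sketch, the process name worker is an arbitrary choice, and the command runs relative to the application root:
worker: php bin/console messenger:consume integration_incoming --time-limit=60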
Alternative
Since you have already created a unit file /etc/systemd/system/my_worker.service for systemd, you can enable and start it yourself.
For this, container_commands in .ebextensions can be used. For example:
container_commands:
  10_enable_worker:
    command: systemctl enable my_worker.service
  20_start_worker:
    command: systemctl start my_worker.service
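If the unit file is created or changed in the same deployment, it may also be worth reloading systemd's unit definitions first. Since container_commands run in alphabetical order, a lower-numbered command such as the following sketch would run before the enable/start commands above:
container_commands:
  05_reload_systemd:
    command: systemctl daemon-reload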
It's not officially documented, but you can use a systemd service in Amazon Linux 2.
A block like the following should work:
services:
  systemd:
    __SERVICE_NAME__:
      enabled: true
      ensureRunning: true
Support for a "systemd" service is provided by the internal module /usr/lib/python3.7/site-packages/cfnbootstrap/construction.py, which lists the recognized service types: sysvinit, windows, and systemd:
class CloudFormationCarpenter(object):
    _serviceTools = {"sysvinit": SysVInitTool, "windows": WindowsServiceTool, "systemd": SystemDTool}
Note that a systemd service must support chkconfig; in particular, your launch script at /etc/init.d/__SERVICE_NAME__ must include "chkconfig" and "description" lines similar to:
# chkconfig: 2345 70 60
# description: Continuously logs Nginx status.
If you don't support chkconfig correctly then chkconfig --list __SERVICE_NAME__ will print an error, and attempting to deploy to Elastic Beanstalk will log a more detailed error in /var/log/cfn-init.log when it tries to start the service.
Related
I have an AWS instance with Docker installed on it, and some containers are running. I have set up a Laravel project inside Docker.
I can access this web application through the AWS IP address as well as the DNS address (GoDaddy).
I have also designed a GitLab CI/CD pipeline to publish the code to the AWS instance.
When I try to push the code through GitLab pipelines, I get the following error in the pipeline:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I checked Docker; it is running properly. Any clues, please?
.gitlab-ci.yml
http://pastie.org/p/7ELo6wJEbFoKaz7jcmJdDp
The pipeline fails at deploy-api-staging -> script -> scripts/ci/build.
build script
http://pastie.org/p/1iQLZs5GqP2m5jthB4YCbh
deploy script
http://pastie.org/p/2ho6ElfN2iWRcIZJjQGdmy
From what I see, you have installed and registered the GitLab Runner directly on your EC2 instance.
I think the problem is that you haven't given your GitLab Runner user permission to use Docker.
From the official Docker documentation:
The Docker daemon binds to a Unix socket instead of a TCP port. By default that Unix socket is owned by the user root and other users can only access it using sudo. The Docker daemon always runs as the root user.
If you don’t want to preface the docker command with sudo, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
Well, GitLab Runners run CI/CD pipelines as the gitlab-runner user by default, and that user won't use sudo (nor should it be in the sudoers file!), so we have to configure it correctly.
First of all, create a docker group on the EC2 instance where the GitLab Runner is registered:
sudo groupadd docker
Then, we are going to add the user gitlab-runner to that group:
sudo usermod -aG docker gitlab-runner
And we are going to verify that the gitlab-runner user actually has access to Docker:
sudo -u gitlab-runner -H docker info
Now your pipelines should be able to access the Unix socket at unix:///var/run/docker.sock without any problem.
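Keep in mind that group membership is only picked up by new sessions, so the runner process may need to be restarted before pipelines see the change. A minimal sketch, assuming the runner was installed as a system service named gitlab-runner:
sudo systemctl restart gitlab-runner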
Additional Steps if using Docker Runners
If you're using the Docker executor in your runner, you now have to mount that Unix socket into the Docker image you're using.
[[runners]]
  url = "https://gitlab.com/"
  token = REGISTRATION_TOKEN
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "docker:19.03.12"
    privileged = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
Take special note of the contents of the volumes clause.
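Once the socket is mounted, a minimal job like the following (the job name is arbitrary) can be used to confirm that the executor can reach the Docker daemon:
check_docker:
  script:
    - docker info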
I read the AWS X-Ray and AWS Elastic Beanstalk documentation and wonder why they say the X-Ray daemon should be run as an extension. As I understand it, Elastic Beanstalk can run my application as a Docker container. Can I just run the daemon inside that container?
Documentation:
Here they say that we should run the X-Ray daemon as an extension:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-beanstalk.html
Here they show how to run that daemon inside Docker:
https://docs.aws.amazon.com/xray/latest/devguide/xray-daemon-local.html
You should edit your Dockerfile to include the X-Ray daemon:
FROM amazonlinux
RUN yum install -y unzip
RUN curl -o daemon.zip https://s3.us-east-2.amazonaws.com/aws-xray-assets.us-east-2/xray-daemon/aws-xray-daemon-linux-3.x.zip
RUN unzip daemon.zip && cp xray /usr/bin/xray
ENTRYPOINT ["/usr/bin/xray", "-t", "0.0.0.0:2000", "-b", "0.0.0.0:2000"]
EXPOSE 2000/udp
EXPOSE 2000/tcp
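This Dockerfile builds a standalone daemon image, so you would run it as a sidecar next to your application container. A rough sketch of wiring the two together (image and container names are placeholders; most X-Ray SDKs read the daemon address from the AWS_XRAY_DAEMON_ADDRESS environment variable):
# build and run the daemon container
docker build -t xray-daemon .
docker run -d --name xray-daemon -p 2000:2000/udp xray-daemon
# point the application container at the daemon
docker run -d --link xray-daemon -e AWS_XRAY_DAEMON_ADDRESS=xray-daemon:2000 my-app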
I would like to restart php-fpm and apache on my Amazon Linux 1 instances deployed via Elastic Beanstalk. I'm using a load-balanced environment and want to automate the entire deploy process.
Does anybody have any additional information on this post from AWS?
I'm simply trying to use a YAML file to restart these services (gracefully) after each deploy.
example:
services:
  sysvinit:
    myservice:
      enabled: true
      ensureRunning: true
      commands:
        - "command name used as trigger"
I'm really not sure what acceptable input for the "myservice" section is. Any help is appreciated.
Thanks!
I haven't verified the following, but my understanding is as follows. Since you have to restart the php-fpm and apache services, your services section could be:
services:
  sysvinit:
    php-fpm:
      enabled: true
      ensureRunning: true
      commands:
        - 01_some_command_name
    apache:
      enabled: true
      ensureRunning: true
      commands:
        - 02_some_other_command_name
The 01_some_command_name and 02_some_other_command_name are names of commands in the commands section. For example:
commands:
  01_some_command_name:
    command: echo "execute command 01"
  02_some_other_command_name:
    command: echo "execute command 02"
Execution of the two commands should, in my view, trigger a restart of the php-fpm and apache services.
I have an elastic beanstalk application that utilises both the web tier and the worker tier. Jobs are offloaded onto the worker tier from the web tier via SQS to keep the web-facing servers speedy. Both environments use the exact same codebase, and use an RDS instance under them.
I need to run a cron job on the leader server of the worker tier. I've created a .ebextensions folder with a file called crontab in it as follows (it's a Laravel web app):
* * * * * root php /var/www/html/artisan do:something:with:database
Then, I've created a file called 01cronjobs.config, which updates the environment's crontab under root as follows:
container_commands:
  01_remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
.. all good. Now, I want to deploy this to EB using the eb deploy command. However, I only want the worker tier to take on the cron job, as we can only have one server run the crons throughout the group.
Is there a way to tell the ebextensions config file to only run the config command on the worker tier? Something like worker_only: true would be great here, but it doesn't seem to exist.
Can anybody provide some insight on how I might achieve this? Thanks.
Set an environment property like "tier=worker" under Elastic Beanstalk --> Application --> Environment --> Configuration --> Software Configuration.
Use the "test" attribute of the "command" key to check for this property, so the command only gets executed when the environment property is set.
Sample from the AWS doc:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
commands:
  python_install:
    command: myscript.py
    cwd: /home/ec2-user
    env:
      myvarname: myvarvalue
    test: '[ ! /usr/bin/python ] && echo "python not installed"'
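Applied to the cron job from the question, that could look roughly like the following. This is only a sketch: TIER is a hypothetical environment property name (set to "worker" on the worker environment), and it assumes environment properties are exported into the shell that runs the test (on Amazon Linux 1 they can otherwise be loaded from /opt/elasticbeanstalk/support/envvars first).
container_commands:
  02_cronjobs:
    command: "cat .ebextensions/crontab | crontab"
    leader_only: true
    test: '[ "$TIER" = "worker" ]'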
I'm banging my head against a wall trying to both install and then enable a service in elastic beanstalk. What I want to do is:
Install a service in /etc/init.d that points to my python app in /opt/python/current/app/
Have Elastic Beanstalk start and keep-alive the service, as specified in an .ebextensions/myapp.config file.
(Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services)
Here's my .ebextensions/myapp.config file:
container_commands:
  01_copy_service:
    command: "cp /opt/python/ondeck/app/my_service /etc/init.d/"
  02_chmod_service:
    command: "chmod +x /etc/init.d/my_service"

services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      files: [/etc/init.d/my_service]
This fails because services are run before container_commands. If I comment out services, deploy, then uncomment services and deploy again, it works. But I want a single-step deploy, because this will be an auto-scaling node.
Is there a solution? Thanks!
Nate, I have the exact same scenario as you, and I solved it this way:
Drop the "services" section and add a "restart" command.
container_commands:
  ...
  03_restart_service:
    command: /sbin/service my_service restart
You can cause the service to restart after a command is run by using a commands: key under the services: key. The documentation for the services: key is here:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-services
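I haven't tried this myself, but based on that documentation a sketch could look like the following. The names are placeholders, and the commands list under the service refers to entries in the commands section, so Elastic Beanstalk restarts the service whenever that command runs:
commands:
  01_install_my_service:
    command: "cp /path/to/my_service /etc/init.d/ && chmod +x /etc/init.d/my_service"

services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      commands:
        - 01_install_my_service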
I haven't done it myself, but I want to give you some ideas that should work. It's just a matter of convenience and workflow.
Since it is not really an application file, but rather an EC2 file, and it is unlikely to change often, you can do one of the following:
Use files content to create the service init script (see the sketch after this list). You can even have a dedicated config file just for that script.
Store the service init script on S3 and copy its contents with a command.
Create a dummy service script, replace its contents during deployment with a container command, and add a dependency on that command to the service.
(this one is heavy) Create a custom AMI and specify it in the Auto Scaling configuration.
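For example, the first idea could be sketched roughly like this. The chkconfig header and script body are placeholders; the point is that files are processed before services, so the init script already exists when Elastic Beanstalk starts the service:
files:
  /etc/init.d/my_service:
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/sh
      # chkconfig: 2345 70 60
      # description: My service.
      # ... actual start/stop logic goes here ...

services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true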
Hope it helps.