I have a Django web application deployed to AWS Elastic Beanstalk (Python 3.7 running on 64-bit Amazon Linux 2/3.1.3). I am trying to run the following config file:
files:
  "/usr/local/bin/cron_tab.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/bin/bash
      exec &>> /tmp/cron_tab_log.txt
      date > /tmp/date
      source /var/app/venv/staging-LQM1lest/bin/activate
      cd /var/app/current
      python manage.py crontab add
      exit 0

container_commands:
  cron_tab:
    command: "curl /usr/local/bin/cron_tab.sh | bash"
This file is placed in the .ebextensions folder. All other config files are working properly, but this one is not. I have also tried to run the container_commands code manually over SSH, and it gives the output below.
curl: (3) <url> malformed
I also checked the /tmp folder, but there is no cron_tab_log.txt. I checked /usr/local/bin, and cron_tab.sh is located there.
I just want django-crontab to run after the deploy, but it doesn't work. How can I handle this issue?
curl is used for making web (URL) requests, not for executing a script. I think you need to change the last line in your config file to:
command: "sudo /usr/local/bin/cron_tab.sh"
I have a Serverless application using LocalStack, and I am trying to get it fully running via Docker.
I have a docker-compose file that starts localstack for me.
version: '3.1'

services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - EDGE_PORT=4566
      - SERVICES=lambda,s3,cloudformation,sts,apigateway,iam,route53,dynamodb
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/temp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
When I run docker-compose up and then deploy my application to LocalStack using sls deploy, everything works as expected. However, I want Docker to run everything for me, so I would like a single Docker command that starts LocalStack and deploys my service to it.
I have added a Dockerfile to my project with the following content:
FROM node:16-alpine
RUN apk update
RUN npm install -g serverless; \
    npm install -g serverless-localstack;
EXPOSE 3000
CMD ["sls", "deploy", "--host", "0.0.0.0"]
I then run docker build -t serverless/docker . followed by docker run -p 49160:3000 serverless/docker, but I receive the following error:
This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I guess this is what would happen if I tried to run sls deploy in the wrong folder. I have logged into the Docker container and cannot see the app that I want to run there. What am I missing in the Dockerfile that is needed to package it up?
Thanks
Execute the pwd command inside the container while running it. Try
docker run -it serverless/docker pwd
The error shows that sls is not able to find the config file in the current working directory. Either add your config file to the current working directory (include this copy step in the Dockerfile) or copy it to a specific location in the container and pass --config in CMD (sls deploy --config).
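A quick way to confirm that is the issue (a sketch, not part of the original setup) is to bind-mount the project into the container so sls can see serverless.yml without rebuilding the image:
# run from the project root that contains serverless.yml
docker run --rm -it -v "$PWD":/app -w /app serverless/docker sls deploy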
This command can only be run in a Serverless service directory. Make
sure to reference a valid config file in the current working directory
Be sure that you have serverless installed.
Once installed, create a service:
% sls create --template aws-nodejs --path myService
cd into the folder with the serverless.yml file:
% cd myService
This will deploy the function to AWS Lambda:
% sls deploy
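Before wiring this into Docker, you can verify the service directory really contains the config sls needs (a small sketch; sls print just renders the resolved configuration):
ls myService/serverless.yml
(cd myService && sls print)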
So I use AWS Elastic Beanstalk to serve my PHP application. I want to mount EFS to have permanent storage for the images uploaded via my application.
I have created an .ebextensions folder and one file called mount.config with the code below:
packages:
  yum:
    nfs-utils: []
    jq: []

files:
  "/tmp/mount-efs.sh":
    mode: "000755"
    content: |
      #!/usr/bin/env bash
      mkdir -p /mnt/efs
      EFS_NAME=$(/opt/elasticbeanstalk/bin/get-config environment | jq -r '.EFS_NAME')
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $EFS_NAME:/ /mnt/efs || true
      mkdir -p /mnt/efs/questions
      chown webapp:webapp /mnt/efs/questions

commands:
  01_mount:
    command: "/tmp/mount-efs.sh"

container_commands:
  01-symlink-uploads:
    command: ln -s /mnt/efs/questions /var/app/ondeck/images/
Everything works fine until the last command, which fails to create the symlink.
What I have tried so far:
Running the command directly on the machine while changing ondeck to current. This works fine.
Removing the EC2 instance and adding a new one. Still failing.
In the logs I see:
ln: failed to create symbolic link '/var/app/current/images/questions': No such file or directory
Any suggestion what could be the reason?
Ok, I fixed it by replacing ondeck with staging and adding these lines under container_commands:
  01-change-permission:
    command: chmod -R 777 /var/app/staging/images
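Put together, the two commands that end up running against the staging directory look like this (a sketch of the fix described above; on the newer Amazon Linux 2 platforms, /var/app/staging is where Elastic Beanstalk extracts the new version before it becomes /var/app/current):
ln -s /mnt/efs/questions /var/app/staging/images/
chmod -R 777 /var/app/staging/images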
The simple goal:
I would like to have two containers running on my local machine: one Jenkins container and one SSH server container. A Jenkins job could then connect to the SSH server container and execute an aws command to upload a file to S3.
My workspace directory structure:
a docker-compose.yml (details below)
a directory named centos/, with a Dockerfile inside it for building the SSH server image.
The docker-compose.yml:
In my docker-compose.yml I declared the two containers (services):
One Jenkins container, named jenkins.
One SSH server container, named remote_host.
version: '3'

services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote_host
    image: remote-host
    build:
      context: centos7
    networks:
      - net

networks:
  net:
The Dockerfile for remote_host looks like this (notice that the last RUN installs the AWS CLI):
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
    echo remote_user:1234 | chpasswd && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN ssh-keygen -A
RUN rm -rf /run/nologin
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
Current situation with the above setup:
I run docker-compose build and docker-compose up. Both the jenkins container and the remote_host (SSH server) container are up and running successfully.
I can go inside the jenkins container with:
$ docker exec -it jenkins bash
jenkins@7551f2fa441d:/$
I can successfully ssh to the remote_host container by:
jenkins@7551f2fa441d:/$ ssh -i /tmp/remote-key remote_user@remote_host
Warning: the ECDSA host key for 'remote_host' differs from the key for the IP address '172.19.0.2'
Offending key for IP in /var/jenkins_home/.ssh/known_hosts:1
Matching host key in /var/jenkins_home/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
[remote_user@8c203bbdcf72 ~]$
Inside the remote_host container, I have also configured my AWS access key and secret key under ~/.aws/credentials:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
I can successfully run an aws command to upload a file from the remote_host container to my AWS S3 bucket, like:
[remote_user@8c203bbdcf72 ~]$ aws s3 cp myfile s3://mybucket123asx/myfile
What the issue is
Now, I would like my Jenkins job to execute the aws command to upload the file to S3, so I created a shell script inside my remote_host container. The script looks like this:
#!/bin/bash
BUCKET_NAME=$1
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
In Jenkins, I have configured the SSH connection, and the Jenkins job configuration simply runs the script located in the remote_host container.
When I build the Jenkins job, I always get this error in the console: upload failed: ../../tmp/myfile to s3://mybucket123asx/myfile Unable to locate credentials.
Why does the same s3 command work when executed in the remote_host container but not when run from the Jenkins job?
I also tried explicitly exporting the AWS key ID and secret key in the script (bear in mind that I have ~/.aws/credentials configured in remote_host, which works without explicitly exporting the AWS secret key):
#!/bin/bash
BUCKET_NAME=$1
export aws_access_key_id=AKAARXL1CFQNN4UV5TIO
export aws_secret_access_key=MY_SECRETE_KEY
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
OK, I solved my issue by changing the export statements to upper case. The cause of the issue is that when Jenkins runs the script, it runs as remote_user on remote_host. Though I have ~/.aws/credentials set up on remote_host, that file only has read permission for users other than root:
[root@8c203bbdcf72 /]# ls -l ~/.aws/
total 4
-rw-r--r-- 1 root root 112 Sep 25 19:14 credentials
That's why, when Jenkins ran the script to upload the file to S3, it got the Unable to locate credentials failure: the credentials file can't be read by remote_user. So I still have to keep the lines that export the AWS key ID and secret key. @Marcin's comment was helpful: the variable names need to be in capital letters, otherwise it does not work.
So, overall, what I did to fix the issue is to update my script with:
export AWS_ACCESS_KEY_ID=AKAARXL1CFQNN4UV5TIO
export AWS_SECRET_ACCESS_KEY=MY_SECRETE_KEY
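An alternative sketch (not part of the answer above) that avoids hard-coding the keys in the script is to give remote_user its own credentials file, since the one created earlier lives under root's home. This assumes the same credentials file also exists on the Docker host:
# run on the Docker host; container and user names come from the compose file above
docker exec remote_host mkdir -p /home/remote_user/.aws
docker cp ~/.aws/credentials remote_host:/home/remote_user/.aws/credentials
docker exec remote_host chown -R remote_user:remote_user /home/remote_user/.aws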
Hi, I want to use goofys on an AWS Elastic Beanstalk PHP 7.0 environment.
I created .ebextensions/00_install_goofy.config
(installing Go from the binary release because the Go version available via yum is old):
packages:
  yum:
    fuse: []

commands:
  100_install_golang_01:
    command: wget https\://storage.googleapis.com/golang/go1.9.linux-amd64.tar.gz
  100_install_golang_02:
    command: tar -C /usr/local -xzf go1.9.linux-amd64.tar.gz
  100_install_golang_03:
    command: export GOROOT=/usr/local/go
    test: [ -z "$GOROOT" ]
  100_install_golang_04:
    command: export GOPATH=/home/ec2-user/go
    test: [ -z "$GOPATH" ]
  100_install_golang_05:
    command: export PATH=$PATH\:$GOROOT/bin\:$GOPATH/bin
  100_install_golang_06:
    command: echo $GOPATH > gopath
But 100_install_golang_03 does not work well...
Test for Command 100_install_golang_03
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild/prebuild_1_yubion_website] : Completed activity.
[2017-09-09T14:39:52.422Z] INFO [3034] - [Application deployment app-f68c-170909_143641#1/StartupStage0/EbExtensionPreBuild/Infra-EmbeddedPreBuild] : Activity execution failed, because: [Errno 2] No such file or directory (ElasticBeanstalk::ExternalInvocationError)
I can't export the environment variables and PATH. Can I set PATH in .ebextensions?
Or is there a better way to install goofys on Elastic Beanstalk automatically?
Finally, I found that commands defined in .ebextensions run with NO ENVIRONMENT VALUES.
They work in a sandbox-like environment, so the scope of an export command is only the "command" section it appears in.
If you want to use PATH in commands, you have to add the export to every command.
Additionally, if you want to use PATH after eb deploy, see the following link:
How can I add PATH on Elastic Beanstalk
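For example (a minimal sketch using the paths from the question above), everything that needs the Go variables has to set them within the same command:
# one shell line, so the exports are visible to the go invocation that follows them
export GOROOT=/usr/local/go GOPATH=/home/ec2-user/go && \
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin && \
go version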
I have a Django app running inside a single Docker container on AWS Elastic Beanstalk. I cannot get it to run migrations properly; it always sees the old Docker image and tries to run migrations from that (but it doesn't have the latest files).
I package an .ebextensions directory with my Elastic Beanstalk source bundle (a zip containing a Dockerrun.aws.json file and the .ebextensions dir), and it has a setup.config file that looks like this:
container_commands:
  01_migrate:
    command: "CONTAINER=`docker ps -a --no-trunc | grep aws_beanstalk | cut -d' ' -f1 | head -1` && docker exec $CONTAINER python3 manage.py migrate"
    leader_only: true
This is partially modeled after the comments on this SO question.
I have verified that it can work if I simply re-deploy the app a second time, since this time the previous running image will have the updated migrations file.
Does anyone know how to access the latest docker image or latest running container in an .ebextensions script?
Based on AWS Documentation on Customizing Software on Linux Servers, container_commands will be executed before your app is deployed.
You can use the container_commands key to execute commands for your container. The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed. They also have access to environment variables such as your AWS security credentials. Additionally, you can use leader_only. One instance is chosen to be the leader in an Auto Scaling group. If the leader_only value is set to true, the command runs only on the instance that is marked as the leader.
Also take a look at my answer here. It runs some commands in different app deployment states and reports the command results.
So the solution to your problem might be to create a post app deployment hook.
.ebextensions/00_post_migrate.config
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/10_post_migrate.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      if [ -f /tmp/leader_only ]
      then
        rm /tmp/leader_only
        docker exec `docker ps --no-trunc -q | head -n 1` python3 manage.py migrate
      fi

container_commands:
  01_migrate:
    command: "touch /tmp/leader_only"
    leader_only: true
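After a deployment you can check on the leader instance that the hook is in place and see what it logged (a hedged sketch; /var/log/eb-activity.log is the activity log on the older Amazon Linux platforms this answer targets):
ls -l /opt/elasticbeanstalk/hooks/appdeploy/post/10_post_migrate.sh
sudo grep -A 5 post_migrate /var/log/eb-activity.log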
I am using another approach. What I did is run a container based on the newly built image, pass in the environment variables from Elastic Beanstalk, and run the custom command in that container. When that command is done, the container is stopped and removed and the deployment proceeds.
So this is the script I have put inside .ebextensions/scripts/container_command.sh (make sure you replace everything that is within <>):
#!/bin/bash
COMMAND=$1
EB_CONFIG_DOCKER_IMAGE_STAGING=$(/opt/elasticbeanstalk/bin/get-config container -k <environment_name>_image)
EB_SUPPORT_FILES=$(/opt/elasticbeanstalk/bin/get-config container -k support_files_dir)
# build --env arguments for docker from env var settings
EB_CONFIG_DOCKER_ENV_ARGS=()
while read -r ENV_VAR; do
    EB_CONFIG_DOCKER_ENV_ARGS+=(--env "${ENV_VAR}")
done < <($EB_SUPPORT_FILES/generate_env)
docker run --name=shopblender_pre_deploy -d \
    "${EB_CONFIG_DOCKER_ENV_ARGS[@]}" \
    "${EB_CONFIG_DOCKER_IMAGE_STAGING}"
docker exec shopblender_pre_deploy ${COMMAND}
# clean up
docker stop shopblender_pre_deploy
docker rm shopblender_pre_deploy
Now you can use this script to execute any custom command in the container that will be deployed later.
Something like this in .ebextensions/container_commands.config:
container_commands:
  01-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:schema:update --force --no-interaction" &>> /var/log/database.log
    leader_only: true
  02-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console fos:elastica:reset --no-interaction" &>> /var/log/database.log
    leader_only: true
  03-command:
    command: bash .ebextensions/scripts/container_command.sh "php app/console doctrine:fixtures:load --no-interaction" &>> /var/log/database.log
    leader_only: true
This way you also do not need to worry about which container was started last, which is a problem with the solution described above.
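Applied to the Django question above, the same helper script could run the migration against the freshly built image instead of the stale running container (a sketch reusing the script from this answer):
# run from a container_command with leader_only: true, exactly like the examples above
bash .ebextensions/scripts/container_command.sh "python3 manage.py migrate"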