Publish beanstalk environment hook issues - amazon-web-services

I have an issue with my script. I use Elastic Beanstalk to deploy my ASP.NET Core code, and in my postdeploy hook I have this code:
#!/usr/bin/env bash
# Region of the current environment
file1=$(sudo cat /opt/elasticbeanstalk/config/ebenvinfo/region)
# Environment name
file2=$(/opt/elasticbeanstalk/bin/get-config container -k environment_name)
# Fully qualified hostname of the environment
file3=$file2.$file1.elasticbeanstalk.com
echo "$file3"
sudo certbot -n -d "$file3" --nginx --agree-tos --email al@gmail.com
It works perfectly if I run it on the instance, but when it runs as a postdeploy hook I get this error:
[ERROR] An error occurred during execution of command [app-deploy] - [RunAppDeployPostDeployHooks]. Stop running the command. Error: Command .platform/hooks/postdeploy/00_get_certificate.sh failed with error fork/exec .platform/hooks/postdeploy/00_get_certificate.sh: exec format error
PS: My .ebextensions config already grants the script execute permission:
container_commands:
  00_permission_hook:
    command: "chmod +x .platform/hooks/postdeploy/00_get_certificate.sh"
What's wrong?

I had the same issue. Adding
#!/bin/bash
to the top of the .sh file and running chmod +x on it solved it.
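If the exec format error persists after that, a couple of quick checks on the hook file itself can narrow things down. This is a hedged diagnostic sketch using the path from the question, not part of the original answer:
head -n 1 .platform/hooks/postdeploy/00_get_certificate.sh   # first line should be the shebang
file .platform/hooks/postdeploy/00_get_certificate.sh        # should report "Bourne-Again shell script"; also flags CRLF line endings
ls -l .platform/hooks/postdeploy/00_get_certificate.sh       # execute bit should be set (e.g. -rwxr-xr-x)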

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be at the root of the filesystem (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
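For completeness, the systemd workaround mentioned in the question's note looks roughly like the sketch below. This is an assumption-laden outline (the daemon must be managed by systemd, and the credential values are placeholders), not something verified on Cloud Shell:
sudo systemctl edit docker
# In the drop-in override that opens, add (placeholder values):
#   [Service]
#   Environment="AWS_ACCESS_KEY_ID=[your access ID]"
#   Environment="AWS_SECRET_ACCESS_KEY=[your access key]"
sudo systemctl restart docker
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world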

docker image runs ok locally but in ECS I get a message: executable file not found in $PATH

I have a weird error. I'm trying to run a Python script in ECS; the Dockerfile is pretty basic:
FROM python:3.8
COPY . /
RUN pip install -r requirements.txt
CMD ["python", "./get_historical_data.py"]
Building this on my local machine works perfectly, and it runs fine with:
docker run --network=host historical-price
I uploaded this image to ECR and ran it on ECS with a basic config: I just set the container name, pointed the image at my ECR repo, and set some environment variables. When I run this I get:
Status reason CannotStartContainerError: Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: exec: "python": executable file not found in $PATH: unknown
but (really weird) if I SSH into the EC2 instance and run the container manually:
docker run -it -e TICKER='SOL/USDT' -e EXCHANGE='BINANCE' -e DB_HOST='xxx' -e DB_NAME='xxx' -e DB_PASSWORD='xxx' -e DB_PORT='xxx' -e DB_USER='xxx' xxx.dkr.ecr.ap-southeast-2.amazonaws.com/xxx:latest /bin/bash
I can see this running ok...
I've tried several Dockerfiles, using
CMD python ./get_historical_data.py
or using the python3 command instead of python. I also tried omitting the CMD instruction in the Dockerfile and setting the command in the ECS task definition instead.
Nothing works...
I really don't know what could be happening here, because last week I ran a similar task and it worked perfectly. Hope you can help me.
Thank you; please let me know if you need more details.
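No fix is recorded above, but a hedged first diagnostic step (not part of the original question) is to pull the exact image ECS is using and confirm what it declares as its entrypoint/command and whether python is on PATH inside it; an entryPoint or command set in the task definition overrides whatever the Dockerfile specifies:
IMAGE=xxx.dkr.ecr.ap-southeast-2.amazonaws.com/xxx:latest          # placeholder kept from the question
docker pull "$IMAGE"
docker inspect --format 'Entrypoint={{.Config.Entrypoint}} Cmd={{.Config.Cmd}}' "$IMAGE"
docker run --rm --entrypoint sh "$IMAGE" -c 'command -v python && python --version'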

Installing Anaconda on Amazon Elastic Beanstalk to use in Django application

I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py file as shown below, and deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I append the 04_conda_install_pythonocc command below to this .config file, I get a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. The /anaconda folder exists, but when I ran conda --version I got the -bash: conda: command not found error.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed it successfully.
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I append the conda_install_pythonocc command to this edited version of the .config file, it fails again with command failed.
Run manually, all the commands work, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so that conda does not ask for manual confirmation. The absolute path matters because each ebextensions command runs in its own shell, so a PATH exported in one command does not carry over to the next. I only verified the installation procedure, not how to use the package afterwards (e.g. from the Python application), so you will probably need to adjust it to your needs.
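A minimal illustration of that shell behaviour (plain bash, not EB-specific output):
bash -c 'export PATH=$PATH:/anaconda/bin'    # the child shell exits and the PATH change is lost
bash -c 'conda --version'                    # fails: conda: command not found
bash -c '/anaconda/bin/conda --version'      # the absolute path works regardless of PATH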
The successful installation can also be checked in the EB log file /var/log/cfn-init-cmd.log.

Dockerfile for awscli

I am trying to create a Dockerfile that installs awscli and runs a command to list S3 buckets; once the command has executed, the container itself exits. I built the image with
docker build --tag aws-cli:1.0 .
and after building it I run it with
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli
Error: Unable to find image 'aws-cli:latest' locally docker: Error response from daemon: pull access denied for aws-cli, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
FROM python:2.7-alpine3.10
ENV AWS_DEFAULT_REGION='[your region]'
ENV AWS_ACCESS_KEY_ID='[your access key id]'
ENV AWS_SECRET_ACCESS_KEY='[your secret]'
RUN pip install awscli
CMD s3 ls
ENTRYPOINT [ "awscli" ]
You are missing the image name in the docker run command. It should be like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' <docker image>
You missed the image name. Please provide the image name (with its tag) when running docker run, like this:
docker run -it --rm -e AWS_DEFAULT_REGION='[your region]' -e AWS_ACCESS_KEY_ID='[your access ID]' -e AWS_SECRET_ACCESS_KEY='[your access key]' aws-cli:1.0
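One more hedged note beyond the missing tag: the pip package awscli installs its executable as aws, not awscli, so the ENTRYPOINT [ "awscli" ] in the Dockerfile above points at a binary that does not exist and would still fail even with the correct image name. A quick check against the built image:
docker run --rm --entrypoint sh aws-cli:1.0 -c 'command -v awscli'   # prints nothing: no such binary
docker run --rm --entrypoint sh aws-cli:1.0 -c 'command -v aws'      # should print the aws binary path, e.g. /usr/local/bin/aws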

Codeship deploy custom script error: "bash: npm: command not found bash: pm2: command not found"

I'm trying to automatically run "npm install" and "pm2 restart all" whenever Codeship deploys my code to DigitalOcean.
This is the custom script:
rsync -avz -e "ssh" ~/clone/ root@IP:/opt/projectname
ssh root@IP 'cd /opt/projectname/; npm install; pm2 restart all'
The rsync works: the code is deployed to the correct folder on DigitalOcean.
However, the second line fails with:
bash: npm: command not found
bash: pm2: command not found
Why?
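No answer is included above, but a common (and here only assumed) cause is that npm and pm2 were installed via nvm or another mechanism that is only set up in the interactive login shell's startup files, while ssh root@IP '...' runs a non-interactive shell with a much shorter PATH. A hedged way to compare the two environments:
ssh root@IP 'echo $PATH; command -v npm; command -v pm2'                  # what the deploy command sees
ssh -t root@IP 'bash -lc "echo \$PATH; command -v npm; command -v pm2"'   # a login shell, for comparison
If the tools only resolve in the second case, using absolute paths (or sourcing the relevant profile file) inside the deploy command is one way around it.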