AWSEBCLI No such file or directory

I'm attempting to run awsebcli from inside a Docker image based on amazonlinux.
The Dockerfile looks like this:
FROM amazonlinux:latest
ENV PATH "$PATH:/root/.local/bin"
ADD . /myfiles
WORKDIR /myfiles
#copy credentials
RUN cp -R .aws ~
RUN curl -O https://bootstrap.pypa.io/get-pip.py
RUN python get-pip.py --user --no-warn-script-location
RUN pip install awsebcli --upgrade --user
CMD eb --version
This just returns:
ERROR: OSError - [Errno 2] No such file or directory
What have I missed?

This was a dumb issue.
I had named the Elastic Beanstalk config file just "config" (like .aws/config).
It was supposed to be called .elasticbeanstalk/config.yml
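For reference, a minimal .elasticbeanstalk/config.yml looks roughly like this (the application, environment, platform, and region values below are placeholders, not from the original post):
branch-defaults:
  default:
    environment: my-env
global:
  application_name: my-app
  default_platform: Python 3.8
  default_region: us-east-1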

I had the same problem, and it turned out that my configuration file under .elasticbeanstalk/config.yml contained the following line:
sc: git
This makes the eb tool search for a .git directory, which was not present in my case since I only wanted to deploy a zip file.
The error message is far from clear!
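If you hit this variant, a sketch of the fix (field values are placeholders): drop the sc entry from the global section, or initialize a git repository.
global:
  application_name: my-app
  default_region: us-east-1
  # sc: git   <- remove this line when deploying a zip without a .git directory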


Installing Anaconda on Amazon Elastic Beanstalk to use in Django application

I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py file as shown below, and deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
import sys
# make the Anaconda site-packages importable by the application
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I append the 04_conda_install_pythonocc command below to this .config file, the deployment fails with a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSH'd into the instance to check. The /anaconda folder was there, but when I ran conda --version I got a -bash: conda: command not found error.
I then suspected a problem with the PATH, so I edited the .config file as follows and deployed it successfully.
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I add the conda_install_pythonocc command to this edited version of the .config file, it fails again with command failed.
Run manually, all of the commands work, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue on my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so that conda does not ask for manual confirmation. Each command in an .ebextensions config runs in its own shell, so an export PATH in one command does not carry over to the next; that is why the absolute path is needed. I only verified the installation procedure, not how to use the package afterwards (e.g. in the Python application), so you will probably need to adjust it to your needs.
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log
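If you would rather have conda on the PATH, a sketch (untested here) is to chain the export and the install in a single command, since both must run in the same shell:
commands:
  05_conda_install:
    # assumption: /anaconda already exists from the earlier install step
    command: 'export PATH=/anaconda/bin:$PATH && conda install -y -c dlr-sc pythonocc-core=7.4.0'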

How can I use docker volume with ubuntu image to access a downloaded file from AWS S3?

I want to copy a file from AWS S3 to a local directory through a Docker container.
Copying is easy without Docker; I can see the file downloaded in the current directory.
But with Docker, I don't even know how to access the file.
Here is my Dockerfile:
FROM ubuntu
WORKDIR "/Users/ezzeldin/s3docker-test"
RUN apt-get update
RUN apt-get install -y awscli
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
CMD [ "aws", "s3", "cp", "s3://ezz-test/s3-test.py", "." ]
The current working folder where I expect to see the downloaded file is s3docker-test/. After building the Dockerfile, this is how I mount a volume myvol to the local directory:
docker run -d --name devtest3 -v $PWD:/var/lib/docker/volumes/myvol/_data ubuntu
So after running the image I get this:
download: s3://ezz-test/s3-test.py to ./s3-test.py
which shows that the file s3-test.py was downloaded, but when I run ls in the interactive terminal I can't see it. So how can I access that file?
It looks like you are overriding the container's folder with your empty host folder when you run -v $PWD:/var/lib/docker/volumes/myvol/_data.
Try simply copying the file from the container to the host filesystem by running:
docker cp \
<containerId>:/Users/ezzeldin/s3docker-test/s3-test.py \
/host/path/target/s3-test.py
You can run this command even against a stopped container. But first you have to run the container without the folder override:
docker run -d --name devtest3 ubuntu
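Alternatively, a sketch of the bind-mount approach: mount the host directory onto the container's working directory, so the aws s3 cp output lands directly on the host. The image tag s3docker-test is an assumption, and the credentials are forwarded from the host environment:
# build the image from the Dockerfile above, then run it with the
# host's current directory mounted over the container's WORKDIR
docker build -t s3docker-test .
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  -v "$PWD":/Users/ezzeldin/s3docker-test \
  s3docker-test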

CodeDeploy hooks running scripts in the agent installation folder

So, I'm setting up my first application that uses CodeDeploy (EC2 + S3), and I'm having a very hard time figuring out how to run scripts after installation.
I defined an AfterInstall hook in the AppSpec file referring to my bash script file in the project directory. When the commands in the script run, I get errors stating that the files could not be found, so I put an ls command before everything else and checked the logs.
My script is running in the CodeDeploy agent folder. There are many files there that I accidentally created while testing, but I expected them to be in my project root folder.
--Root
----init.sh
----requirements.txt
----server.py
appspec.yml
version: 0.0
os: linux
files:
  - source: ./
    destination: /home/ubuntu/myapp
runas: ubuntu
hooks:
  AfterInstall:
    - location: init.sh
      timeout: 600
init.sh
#!/bin/bash
ls
sudo apt install python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py
So when ls executes, it doesn't list the files in my project root directory. I also tried ${PWD} instead of ./ and it didn't work. The agent is copying the script file to its own folder and running it there.
Refer to https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
This is written at the end of that document:
The location of scripts you specify in the 'hooks' section is relative
to the root of the application revision bundle. In the preceding
example, a file named RunResourceTests.sh is in a directory named
Scripts. The Scripts directory is at the root level of the bundle.
But apparently that refers only to the paths in the appspec file.
Could someone help? Is that correct? Must I hard-code absolute paths in the script file?
Yes, that's correct. The script does not execute in the destination folder, as you might expect. You need to hard-code a reference to the destination directory /home/ubuntu/myapp to resolve file paths in lifecycle scripts.
Use cd to change the directory first:
#!/bin/bash
# change to the deployed application directory so relative paths resolve
cd /home/ubuntu/myapp
ls
sudo apt install -y python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py

Docker Image command python returning non-zero code

So I'm trying to build a new Docker image with Python 2.7 and pip for Python 2.7; however, I'm getting a "The command '/bin/sh -c pip2 install -r requirements.txt' returned a non-zero code: 1" error when trying to build the image.
FROM colstrom/python:legacy
MAINTAINER **REDACTED**
RUN pip2 install -r requirements.txt
CMD ["python2.7", "parser.py"]
Any ideas?
You have to COPY/ADD or mount your data (at least requirements.txt and parser.py) into the container.
Assuming your Dockerfile resides at the root directory of your project:
FROM colstrom/python:legacy
MAINTAINER **REDACTED**
COPY . .
RUN pip2 install -r requirements.txt
CMD ["python2.7", "parser.py"]

Docker image creation for AWS logs agent - ERROR

Hi, I would like to create a Docker image with the AWS logs agent service.
I have written the following script.
My Dockerfile
FROM ubuntu:latest
ENV AWS_REGION ap-northeast-1
RUN apt-get update && apt-get install -y curl python python-pip \
&& rm -rf /var/lib/apt/lists/*
COPY awslogs.conf ./
RUN curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
RUN chmod +x ./awslogs-agent-setup.py
RUN ./awslogs-agent-setup.py --non-interactive --region ${AWS_REGION} --configfile ./awslogs.conf
RUN apt-get purge curl -y
RUN mkdir /var/log/awslogs
WORKDIR /var/log/awslogs
CMD /bin/sh /var/awslogs/bin/awslogs-agent-launcher.sh
*********************END OF FILE *****************************
During the image build I get the following error:
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 5 of 5: Setting up agent as a daemon ...Traceback (most recent call last):
  File "/awslogs-agent-setup.py", line 1272, in <module>
    main()
  File "/awslogs-agent-setup.py", line 1268, in main
    setup.setup_artifacts()
  File "/awslogs-agent-setup.py", line 827, in setup_artifacts
    self.setup_daemon()
  File "/awslogs-agent-setup.py", line 773, in setup_daemon
    self.setup_agent_nanny()
  File "/awslogs-agent-setup.py", line 764, in setup_agent_nanny
    self.setup_cron_jobs()
  File "/awslogs-agent-setup.py", line 734, in setup_cron_jobs
    with open (nanny_cron_path, "w") as cron_fragment:
IOError: [Errno 2] No such file or directory: '/etc/cron.d/awslogs'
The command '/bin/sh -c python /awslogs-agent-setup.py -n -r eu-west-1 -c ./awslogs.conf.dummy' returned a non-zero code: 1
Please help me to fix this.
You are just missing the cron package in your Dockerfile. It doesn't matter whether cron is installed on your host system.
FROM ubuntu:latest
ENV AWS_REGION ap-northeast-1
RUN apt-get update && apt-get install -y curl python cron python-pip \
&& rm -rf /var/lib/apt/lists/*
COPY awslogs.conf ./
RUN curl https://s3.amazonaws.com/aws-cloudwatch/downloads/latest/awslogs-agent-setup.py -O
RUN chmod +x ./awslogs-agent-setup.py
RUN ./awslogs-agent-setup.py --non-interactive --region ${AWS_REGION} --configfile ./awslogs.conf
RUN apt-get purge curl -y
RUN mkdir /var/log/awslogs
WORKDIR /var/log/awslogs
CMD /bin/sh /var/awslogs/bin/awslogs-agent-launcher.sh
After adding the cron package to the Dockerfile, everything works fine:
Step 1 of 5: Installing pip ...DONE
Step 2 of 5: Downloading the latest CloudWatch Logs agent bits ... DONE
Step 5 of 5: Setting up agent as a daemon ...DONE
------------------------------------------------------
- Configuration file successfully saved at: /var/awslogs/etc/awslogs.conf
- You can begin accessing new log events after a few moments at https://console.aws.amazon.com/cloudwatch/home?region=us-west-2#logs:
- You can use 'sudo service awslogs start|stop|status|restart' to control the daemon.
- To see diagnostic information for the CloudWatch Logs Agent, see /var/log/awslogs.log
- You can rerun interactive setup using 'sudo python ./awslogs-agent-setup.py --region us-west-2 --only-generate-config'
------------------------------------------------------
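To verify the fix locally, a quick sketch (the image name awslogs-agent is an assumption; AWS credentials must still be supplied at runtime, e.g. via environment variables or an instance role):
# build the fixed image and start the agent in the background
docker build -t awslogs-agent .
docker run -d --name awslogs \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  awslogs-agent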
Have you checked that a cron daemon is actually installed in your image?
Otherwise, you can try installing the agent manually with pip3.5 install awscli-cwlogs, or first installing pip with apt-get update && apt-get install -y python-pip libpython-dev.
You can refer to this question.