Two-part question; I would really appreciate help on either part. I'm attempting to install Anaconda followed by numbapro on AWS EB. My options.config in .ebextensions looks like this:
commands:
00_download_conda:
command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
test: test ! -d /anaconda
01_install_conda:
command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh'
command: echo 'Finished installing Anaconda'
test: test ! -d /anaconda
02_install_cuda:
command: 'export PATH=$PATH:$HOME/anaconda2/bin'
command: echo 'About to install numbapro'
command: 'conda install -c anaconda numbapro'
Whenever I attempt to deploy this, I run into a timeout, and when I try to manually stop the current processes from the console I get an error saying that the environment is not in a state where I can abort the current operation or view any log files.
There are a couple of problems here.
First, you need to make sure that you're properly indenting your YAML file, as YAML is sensitive to whitespace. Your file should look like this:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh'
  ...
Next, you can only have one command: entry per named command. The echo commands aren't particularly valuable anyway, since you can see which commands are being executed by looking at /var/log/eb-activity.log. You can also combine the export PATH line with the conda install, something like this:
PATH=$PATH:$HOME/anaconda2/bin conda install -c anaconda numbapro
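Putting those pieces together, the whole options.config would look something like the sketch below. The -b and -p flags are my assumption so the Anaconda installer runs non-interactively into /anaconda (the interactive prompt from a bare bash Anaconda2-....sh is one plausible cause of your deployment timeout), and -y keeps conda from waiting for confirmation; treat this as a starting point rather than a tested config:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh -b -p /anaconda'
    test: test ! -d /anaconda
  02_install_numbapro:
    command: 'PATH=$PATH:/anaconda/bin conda install -y -c anaconda numbapro'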
If you're still having trouble after you clear up those items, check (or post here) eb-activity.log to see what's going on.
Refer to the documentation for more details.
Related
This is my third day of tear-your-hair-out since the weekend and I just cannot get ENTRYPOINT to work via GitLab Runner 13.3.1. This is for something that previously worked with a simple ENTRYPOINT ["/bin/bash"], but that was using local Docker Desktop with docker run followed by docker exec commands, which worked like a cinch. Essentially, at the end of it all I previously got a WAR file built.
Currently I build my container in GitLab Runner 13.3.1 and push it to an S3 bucket, then use IMAGE:localhost:500/my-recently-builtcontainer and try to do whatever it is I want with the container, but I cannot even get ENTRYPOINT to work, in its exec form or in shell form - at least in the shell form I get to see something. In the exec form it just gave opaque "OCI runtime create failed" errors, so I shifted to the shell form just to see where I could get to.
I keep getting
sh: 1: sh: echo HOME=/home/nonroot-user params=#$ pwd=/ whoami=nonroot-user script=sh ENTRYPOINT reached which_sh=/bin/sh which_bash=/bin/bash PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin; ls -alrth /bin/bash; ls -alrth /bin/sh; /usr/local/bin/entrypoint.sh ;: not found
In my Dockerfile I distinctly have
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN bash -c "ls -larth /usr/local/bin/entrypoint.sh"
ENTRYPOINT "echo HOME=${HOME} params=#$ pwd=`pwd` whoami=`whoami` script=${0} ENTRYPOINT reached which_sh=`which sh` which_bash=`which bash` PATH=${PATH}; ls -alrth `which bash`; ls -alrth `which sh`; /usr/local/bin/lse-entrypoint.sh ;"
The output after I build the container in GitLab is below - and I made sure anyone has rights to see this file and use it, just so that I can proceed with my work:
-rwxrwxrwx 1 root root 512 Apr 11 17:40 /usr/local/bin/entrypoint.sh
So I know it is there, and the chmod flags indicate anybody can read and execute it, so I am perplexed as to why it says NOT FOUND:
/usr/local/bin/entrypoint.sh ;: not found
entrypoint.sh is ...
#!/bin/sh
export PATH=$PATH:/usr/local/bin/
clear
echo Script is $0
echo numOfArgs is $#
echo paramtrsPassd is $#
echo whoami is `whoami`
bash --version
echo "About to exec ....."
exec "$#"
It does not even reach inside this entrypoint.sh file.
I have a Django application which is deployed to Amazon Elastic Beanstalk. I have to install Anaconda to install the pythonocc-core package. I created a .config file in the .ebextensions folder and added the Anaconda path to my wsgi.py file as below, and I deployed it successfully.
.config file:
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_conda_activate_installation:
    command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I add the 04_conda_install_pythonocc command below to the end of this .config file, I get a command failed error.
  04_conda_install_pythonocc:
    command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. I saw that the /anaconda folder had been created, but when I ran the conda --version command I got the -bash: conda: command not found error.
Afterwards, I thought there might be a problem with the PATH, so I edited the .config file as follows and deployed this version successfully.
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
    test: test ! -d /anaconda
  02_create_home:
    command: 'mkdir -p /home/wsgi'
  03_add_path:
    command: 'export PATH=$PATH:$HOME/anaconda/bin'
  04_conda_activate_installation:
    command: 'source ~/.bashrc'
But when I add the conda_install_pythonocc command again to the end of this edited .config file, it fails again with command failed.
All of the commands work when I run them manually, but they don't work in my .config file.
How can I fix this issue and install the package with conda?
I tried to replicate the issue in my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
  00_download_conda:
    command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
  01_install_conda:
    command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
  05_conda_install:
    command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so conda does not ask for manual confirmation. Each entry under commands runs in its own shell, so an export PATH in one command does not carry over to the next; that is why the absolute path is needed. I only verified the installation procedure, not how to use it afterwards (e.g. not how to use it in the Python application), so you will probably need to adjust it to your needs.
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log
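If you want to double-check from inside the instance, something along these lines should confirm the installation (just a sketch reusing the absolute path from the config above):
/anaconda/bin/conda --version
/anaconda/bin/conda list pythonocc-core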
I have installed sam using the following guide:
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
I can run the following:
sam build
But not
sudo sam build
which gives me => sudo: sam: command not found
Digging further, I found a suggestion that I need to give sudo the right PATH, as follows.
sudo env "PATH=/home/linuxbrew/.linuxbrew/bin/sam" sam
Is the above correct? I haven't run this command and I'm not sure whether it is proper.
This is what I have run.
test -d ~/.linuxbrew && eval $(~/.linuxbrew/bin/brew shellenv)
test -d /home/linuxbrew/.linuxbrew && eval $(/home/linuxbrew/.linuxbrew/bin/brew shellenv)
test -r ~/.bash_profile && echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.bash_profile
echo "eval \$($(brew --prefix)/bin/brew shellenv)" >>~/.profile
You can try this:
In a normal terminal (normal user):
which sam
This will give you the location where sam is installed, let's say /somewhere/bin/sam.
Then try:
sudo /somewhere/bin/sam build
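Alternatively, you can keep your normal user PATH when going through sudo, which avoids hard-coding the location (a sketch in the spirit of the env trick from the question; note that PATH should contain the bin directory, not point at the sam binary itself):
sudo env "PATH=$PATH" sam build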
If you followed the tutorial for the Linux + Brew + SAM install, maybe you forgot to run the command:
brew install aws-sam-cli
Or just make an alias for the command:
nano ~/.bashrc
Add this row at the end:
alias sam='/home/linuxbrew/.linuxbrew/bin/sam'
Save, then restart the terminal.
pip3 install aws-sam-cli
This worked for me.
Run the command below after following the official AWS SAM CLI installation tutorial:
$ brew install aws-sam-cli
==> Installing aws-sam-cli from aws/tap
==> Downloading https://github.com/aws/aws-sam-...
...
🍺  /home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2: 3,899 files, 91MB
At the end it will show where it was installed; for me it is the path
/home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2/libexec/bin/sam
Then make a symbolic link:
$ ln -s /home/linuxbrew/.linuxbrew/Cellar/aws-sam-cli/1.13.2/libexec/bin/sam /home/linuxbrew/.linuxbrew/bin/sam
Now you will be able to call sam easily:
$ sam --version
SAM CLI, version 1.13.2
Is there a way to drop root user on AWS CodeBuild?
We are building a Yocto project that fails on CodeBuild if we're root (Bitbake sanity check).
Our desperate approach doesn't work either:
...
build:
  commands:
    - chmod -R 777 $(pwd)/ && chown -R builder $(pwd)/ && su -c "$(pwd)/make.sh" -s /bin/bash builder
...
Fails with:
bash: /codebuild/output/src624711770/src/.../make.sh: Permission denied
Any idea how we could run this as a non-root user?
I succeeded in using a non-root user in AWS CodeBuild.
It takes much more than knowing some CodeBuild options to come up with a practical solution.
Everyone should spot the run-as option quite easily.
The next question is "which user?"; you cannot just put any word as a username.
In order to find out which users are available, the next clue is in the Docker images provided by CodeBuild section of the docs. There, you'll find a link to each image definition.
For me, the link led to this page on GitHub.
After inspecting the source code of the Dockerfile, we know that there is a user called codebuild-user available, and we can use this codebuild-user for our run-as in the buildspec.
Then we face a whole lot of other problems, because the standard image installs each language runtime for root only.
This is as far as generic explanations can go.
For me, I wanted to use the Ruby runtime, so my only concern is the Ruby runtime.
If you use CodeBuild for something else, you are on your own now.
In order to use the Ruby runtime as codebuild-user, we have to expose it from the root user. To do that, I changed the required permissions and the owner of the .rbenv used by the CodeBuild image with the following commands:
chmod +x ~
chown -R codebuild-user:codebuild-user ~/.rbenv
Bundler (Ruby's dependency management tool) still wants to access the home directory, which is not writable. We have to set an environment variable to make it use another writable location as the home directory.
The environment variable is BUNDLE_USER_HOME.
Putting everything together, my buildspec looks like this:
version: 0.2
env:
  variables:
    RAILS_ENV: test
    BUNDLE_USER_HOME: /tmp/bundle-user
    BUNDLE_SILENCE_ROOT_WARNING: true
run-as: codebuild-user
phases:
  install:
    runtime-versions:
      ruby: 2.x
    run-as: root
    commands:
      - chmod +x ~
      - chown -R codebuild-user:codebuild-user ~/.rbenv
      - bundle config set path 'vendor/bundle'
      - bundle install
  build:
    commands:
      - bundle exec rails spec
cache:
  paths:
    - vendor/bundle/**/*
My points are that it is, indeed, possible, and to show how I did it for my use case.
Thank you for this feature request. Currently you cannot run as a non-root user in CodeBuild; I have passed it to the team for further review. Your feedback is very much appreciated.
To run CodeBuild as non-root you need to specify a Linux username using the run-as tag in your buildspec.yaml, as shown in the docs:
version: 0.2
run-as: Linux-user-name
env:
  variables:
    key: "value"
    key: "value"
  parameter-store:
    key: "value"
    key: "value"
phases:
  install:
    run-as: Linux-user-name
    runtime-versions:
      runtime: version
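For example, with the codebuild-user that the other answer found in the standard images, a minimal sketch could look like the following (you may still need to fix file permissions for that user, and you would add your own runtime-versions and remaining phases as usual):
version: 0.2
run-as: codebuild-user
phases:
  build:
    commands:
      - whoami        # should print codebuild-user
      - ./make.sh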
What we ended up doing was the following:
Create a Dockerfile which contains all the stuff needed to build a Yocto / Bitbake project, in which we ADD the required sources and create a user builder which we use to build our project.
FROM ubuntu:16.04
RUN apt-get update && apt-get -y upgrade
# Required Packages for the Host Development System
RUN apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
xz-utils debianutils iputils-ping vim
# Additional host packages required by poky/scripts/wic
RUN apt-get install -y curl dosfstools mtools parted syslinux tree
# Create a non-root user that will perform the actual build
RUN id builder 2>/dev/null || useradd --uid 30000 --create-home builder
RUN apt-get install -y sudo
RUN echo "builder ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
# Fix error "Please use a locale setting which supports utf-8."
# See https://wiki.yoctoproject.org/wiki/TipsAndTricks/ResolvingLocaleIssues
RUN apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
echo 'LANG="en_US.UTF-8"'>/etc/default/locale && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
WORKDIR /home/builder/
ADD ./ ./
USER builder
ENTRYPOINT ["/bin/bash", "-c", "./make.sh"]
We build this Docker image during the CodeBuild pre_build step and run the actual build in the ENTRYPOINT (in make.sh) when we run the image. After the container has exited, we copy the artifacts to the CodeBuild host and put them on S3:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir ./images
      - docker build -t bob .
  build:
    commands:
      - docker run bob:latest
  post_build:
    commands:
      # copy the last exited container's images onto the host as a build artifact
      - docker cp $(docker container ls -a | head -2 | tail -1 | awk '{ print $1 }'):/home/builder/yocto-env/build/tmp/deploy/images ./images
      - tar -cvzf artifacts.tar.gz ./images/*
artifacts:
  files:
    - artifacts.tar.gz
The only drawback of this approach is that we can't (easily) use CodeBuild's caching functionality. But the build is fast enough for us, since we do local builds during the day and basically one rebuild from scratch at night, which takes about 90 minutes (on the most powerful CodeBuild instance).
Sigh, so I came across this question and I am disappointed that there is no good or simple answer to this problem. There are many, many processes that strongly discourage running as root, like composer, and others that will flat-out refuse, like wp-cli. If you are using the Ubuntu "standard image" provided by AWS, then there appears to be an existing user in the /etc/passwd file, dockremap:x:1000:1000::/home/dockremap:/bin/sh. I think this user is for userns-remap in docker, and I am not sure about its availability. The other option that astonishingly hasn't been mentioned is running useradd -N -G users develop to create a new user in the container. It is far simpler than spinning up a custom container for something so trivial.
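A rough sketch of how that could look in a build command, reusing the su hand-off from the question (the username develop is only an example, and your script path will differ):
# create an unprivileged user, give it the working tree, and hand the build over to it
useradd -N -G users develop
chown -R develop: .
su -s /bin/bash -c './make.sh' develop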
I'm trying to execute lein run in a Clojure Docker image out of a mounted folder that is not /, but when I try to cd into a folder, Docker complains with unable to locate cd:
docker run -v /root/chortles:/test -i jphackworth/docker-clojure cd /test && lein run
=> Unable to locate cd
How do I instruct Leiningen to run in a different folder, or tell Docker to change the directory prior to running my command?
You can use the -w parameter for docker run. It specifies the working directory inside the container.
docker run -w /test -v /root/chortles:/test -i jphackworth/docker-clojure lein run
The best bet is to add a shell script to the docker image and call that.
Have a script called, say, lein-wrapper.sh, and install it in /usr/local/bin. The script should sort out the environment for Leiningen and then call it. Something like this:
#!/bin/sh
export PATH=${LEININGEN_INSTALL}:${PATH}
cd /test
lein "$@"
You can set this in your Dockerfile:
ENTRYPOINT ["/usr/local/bin/lein-wrapper.sh"]
And invoke it as:
# Will call /usr/local/bin/lein-wrapper.sh run
# which will call lein run
docker run -v /root/chortles:/test -i jphackworth/docker-clojure run
# or run lein deps...
docker run -v /root/chortles:/test -i jphackworth/docker-clojure deps
cd is a shell builtin, not a standalone executable, which is why it can't be located. You can use bash -c 'cd /test && lein run'. Better yet, do as @Jiri states and use the -w parameter to set the working directory of the container; then just use lein run to start your app.
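Spelled out in full against the command from the question, the bash -c variant would be something like this (a sketch; the volume, image, and lein invocation are taken from the question):
docker run -v /root/chortles:/test -i jphackworth/docker-clojure bash -c 'cd /test && lein run'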