AWS CodeDeploy agent install just fails - what am I missing?

I follow, verbatim, the instructions given, and I get an error almost as if some utterly unrelated program is being invoked. For the record, it seemed to be working yesterday.
I am running this on Amazon Linux 2:
sudo yum update
sudo yum -y install ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-${AWS::Region}.s3.${AWS::Region}.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status
and here is what happens:
[root@ip-10-204-84-134 bin]# sudo ./install auto
./install: missing destination file operand after ‘auto’
Try './install --help' for more information.
[root@ip-10-204-84-134 bin]# sudo ./install --help
Usage: ./install [OPTION]... [-T] SOURCE DEST
or: ./install [OPTION]... SOURCE... DIRECTORY
or: ./install [OPTION]... -t DIRECTORY SOURCE...
or: ./install [OPTION]... -d DIRECTORY...
This install program copies files (often just compiled) into destination
locations you choose. If you want to download and install a ready-to-use
package on a GNU/Linux system, you should instead be using a package manager
like yum(1) or apt-get(1).
In the first three forms, copy SOURCE to DEST or multiple SOURCE(s) to
the existing DIRECTORY, while setting permission modes and owner/group.
In the 4th form, create all components of the given DIRECTORY(ies).
Mandatory arguments to long options are mandatory for short options too.
--backup[=CONTROL] make a backup of each existing destination file
-b like --backup but does not accept an argument
-c (ignored)
-C, --compare compare each pair of source and destination files, and
in some cases, do not modify the destination at all
-d, --directory treat all arguments as directory names; create all
components of the specified directories
-D create all leading components of DEST except the last,
then copy SOURCE to DEST
-g, --group=GROUP set group ownership, instead of process' current group
-m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x
-o, --owner=OWNER set ownership (super-user only)
-p, --preserve-timestamps apply access/modification times of SOURCE files
to corresponding destination files
-s, --strip strip symbol tables
--strip-program=PROGRAM program used to strip binaries
-S, --suffix=SUFFIX override the usual backup suffix
-t, --target-directory=DIRECTORY copy all SOURCE arguments into DIRECTORY
-T, --no-target-directory treat DEST as a normal file
-v, --verbose print the name of each directory as it is created
-P, --preserve-context preserve SELinux security context (-P deprecated)
-Z set SELinux security context of destination
file to default type
--context[=CTX] like -Z, or if CTX is specified then set the
SELinux or SMACK security context to CTX
--help display this help and exit
--version output version information and exit
The backup suffix is '~', unless set with --suffix or SIMPLE_BACKUP_SUFFIX.
The version control method may be selected via the --backup option or through
the VERSION_CONTROL environment variable. Here are the values:
none, off never make backups (even if --backup is given)
numbered, t make numbered backups
existing, nil numbered if numbered backups exist, simple otherwise
simple, never always make simple backups
GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
Report install translation bugs to <http://translationproject.org/team/>
For complete documentation, run: info coreutils 'install invocation'

OK, so the documentation is broken: the URL contains an unexpanded CloudFormation-style placeholder, so wget never downloaded anything, and because my working directory was /bin, ./install resolved to GNU coreutils' install(1), which explains the "utterly unrelated program". Upon just looking in that bucket I found an RPM and ran it instead:
[root@ip-10-204-84-134 bin]# sudo yum install -y https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/codedeploy-agent.noarch.rpm
[root@ip-10-204-84-134 bin]# sudo service codedeploy-agent status
The AWS CodeDeploy agent is running as PID 4572
[root@ip-10-204-84-134 bin]#
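For reference, the wget URL in the docs needs the bucket region filled in; ${AWS::Region} is a CloudFormation placeholder that a plain shell will not expand. With us-east-1 substituted (the same bucket the RPM above came from; adjust to your region), the documented sequence does work:
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status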

Related

How to run AWS SageMaker lifecycle config scripts as a background job

I am trying to customize Amazon SageMaker Notebook Instances using Lifecycle Configurations because I need to install additional pip packages. This means I have to create an on-start.sh and an on-create.sh script within a lifecycle configuration. You can see a sample here.
Now, I have many packages and the installation time might go over 5 minutes, causing a potential timeout. It is suggested to use nohup to run the script as a background job in that case.
But how do I run this with nohup, since I do not have a terminal in this case? Is there a way to run the script as a background job from within the script? Anything else I am missing? Please suggest.
I have done this before, installing many libraries for around 15 minutes. I wrapped the script I actually want to run in a create.sh and ran that create.sh using nohup. You can view the logs in CloudWatch, the SageMaker start won't time out, and as a bonus you get a nohup.out file in the directory where you executed nohup.
Below, I wrapped the script from https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/tree/master/scripts/export-to-pdf-enable into create.sh:
#!/bin/bash
set -e
cat <<'EOF'>create.sh
#!/bin/bash
sudo -u ec2-user -i <<'EOF'
set -e
# OVERVIEW
# This script enables Jupyter to export a notebook directly to PDF.
# nbconvert depends on XeLaTeX and several LaTeX packages that are non-trivial to
# install because `tlmgr` is not included with the texlive packages provided by yum.
# REQUIREMENTS
# Internet access is required in on-create.sh in order to fetch the latex libraries from the ctan mirror.
sudo yum install -y texlive*
unset SUDO_UID
ln -s /home/ec2-user/SageMaker/.texmf /home/ec2-user/texmf
EOF
# The line above closes the OUTER heredoc (both heredocs use the same 'EOF'
# delimiter), so create.sh is written without the inner heredoc's terminator;
# append it here before running.
echo 'EOF' >> create.sh
# Run create.sh detached so on-create.sh returns before the 5-minute timeout.
nohup bash create.sh &
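To check on the job afterwards, you can tail the nohup output or the lifecycle logs (paths here are assumptions; nohup.out is created in whatever directory nohup was started from):
tail -f /home/ec2-user/nohup.out
# or follow the CloudWatch log group used by notebook instances:
aws logs tail /aws/sagemaker/NotebookInstances --follow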

Copy .txt files from docker container to host

So I have a docker container with .txt and .csv files in it. I need to copy these to the host, but only the .txt files. The command sudo docker cp -a env1_1:/path/*.txt . does not seem to work. Is copying files of a specific type possible using docker cp? I am unable to find any alternatives for now.
Any suggestions?
Thanks.
Indeed, docker cp does not support glob patterns, and as an aside, container paths are absolute:
The docker cp command assumes container paths are relative to the container’s / (root) directory. This means supplying the initial forward slash is optional […]
Local machine paths can be an absolute or relative value. The command interprets a local machine’s relative paths as relative to the current working directory where docker cp is run.
However, one may devise a workaround using docker exec and a hand-crafted shell command that runs tar on both sides (assuming tar is available in the image):
sudo docker exec env1_1 sh -c 'cd /path && tar cf - *.txt' | tar xvf -
or if need be:
sudo docker exec env1_1 sh -c 'cd /path && tar cf - *.txt' | ( cd /dest/path && tar xvf - )
Here, the special filename - denotes STDOUT (or STDIN, respectively).
Usual disclaimer: the final tar command will overwrite the selected files in the current folder (or /dest/path in the second example) without further notice.
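If tar happens to be unavailable in the image, a cruder sketch (./path-copy is an arbitrary name for the local copy) is to copy the whole directory with docker cp and prune everything that is not a .txt file:
sudo docker cp env1_1:/path ./path-copy
find ./path-copy -type f ! -name '*.txt' -delete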

AWS CodeBuild as non-root user

Is there a way to drop root user on AWS CodeBuild?
We are building a Yocto project that fails on CodeBuild if we're root (Bitbake sanity check).
Our desperate approach doesn't work either:
...
build:
  commands:
    - chmod -R 777 $(pwd)/ && chown -R builder $(pwd)/ && su -c "$(pwd)/make.sh" -s /bin/bash builder
...
Fails with:
bash: /codebuild/output/src624711770/src/.../make.sh: Permission denied
Any idea how we could run this as non-root?
I succeeded in using a non-root user in AWS CodeBuild.
It takes much more than knowing some CodeBuild options to come up with a practical solution.
Everyone should spot the run-as option quite easily.
The next question is "which user?"; you cannot just put any word in as a username.
In order to find out which users are available, the next clue is in the Docker images provided by CodeBuild section. There, you'll find a link to each image definition.
For me, the link leads to this page on GitHub.
After inspecting the source code of the Dockerfile, we'll know that there is a user called codebuild-user available. And we can use this codebuild-user for our run-as in the buildspec.
Then we'll be faced with a whole lot of other problems, because the standard image installs each language's runtime for root only.
This is as far as generic explanations can go.
For me, I wanted to use the Ruby runtime, so my only concern is the Ruby runtime.
If you use CodeBuild for something else, you are on your own now.
In order to utilize the Ruby runtime as codebuild-user, we have to expose it from the root user. To do that, I changed the required permissions and owner of the .rbenv used by the CodeBuild image with the following commands.
chmod +x ~
chown -R codebuild-user:codebuild-user ~/.rbenv
The bundler (Ruby's dependency management tool) still wants to access the home directory, which is not writable. We have to set an environment variable to make it use another writable location as the home directory.
The environment variable is BUNDLE_USER_HOME.
Putting everything together, my buildspec looks like:
version: 0.2
env:
  variables:
    RAILS_ENV: test
    BUNDLE_USER_HOME: /tmp/bundle-user
    BUNDLE_SILENCE_ROOT_WARNING: true
run-as: codebuild-user
phases:
  install:
    runtime-versions:
      ruby: 2.x
    run-as: root
    commands:
      - chmod +x ~
      - chown -R codebuild-user:codebuild-user ~/.rbenv
      - bundle config set path 'vendor/bundle'
      - bundle install
  build:
    commands:
      - bundle exec rails spec
cache:
  paths:
    - vendor/bundle/**/*
My points are: it is, indeed, possible, and the above shows how I did it for my use case.
Thank you for this feature request. Currently you cannot run as a non-root user in CodeBuild, I have passed it to the team for further review. Your feedback is very much appreciated.
To run CodeBuild as non-root you need to specify a Linux username using the run-as tag in your buildspec.yaml, as shown in the docs:
version: 0.2
run-as: Linux-user-name
env:
  variables:
    key: "value"
    key: "value"
  parameter-store:
    key: "value"
    key: "value"
phases:
  install:
    run-as: Linux-user-name
    runtime-versions:
      runtime: version
What we ended up doing was the following:
Create a Dockerfile which contains all the stuff needed to build a Yocto / Bitbake project, in which we ADD the required sources and create a user builder which we use to build our project.
FROM ubuntu:16.04
RUN apt-get update && apt-get -y upgrade
# Required Packages for the Host Development System
RUN apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
    build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
    xz-utils debianutils iputils-ping vim
# Additional host packages required by poky/scripts/wic
RUN apt-get install -y curl dosfstools mtools parted syslinux tree
# Create a non-root user that will perform the actual build
RUN id builder 2>/dev/null || useradd --uid 30000 --create-home builder
RUN apt-get install -y sudo
RUN echo "builder ALL=(ALL) NOPASSWD: ALL" | tee -a /etc/sudoers
# Fix error "Please use a locale setting which supports utf-8."
# See https://wiki.yoctoproject.org/wiki/TipsAndTricks/ResolvingLocaleIssues
RUN apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
    echo 'LANG="en_US.UTF-8"' > /etc/default/locale && \
    dpkg-reconfigure --frontend=noninteractive locales && \
    update-locale LANG=en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
WORKDIR /home/builder/
ADD ./ ./
USER builder
ENTRYPOINT ["/bin/bash", "-c", "./make.sh"]
We build this Docker image during the CodeBuild pre_build step and run the actual build in the ENTRYPOINT (in make.sh) when we run the image. After the container has exited, we copy the artifacts to the CodeBuild host and put them on S3:
version: 0.2
phases:
  pre_build:
    commands:
      - mkdir ./images
      - docker build -t bob .
  build:
    commands:
      - docker run bob:latest
  post_build:
    commands:
      # copy the last exited container's images onto the host as build artifact
      - docker cp $(docker container ls -a | head -2 | tail -1 | awk '{ print $1 }'):/home/builder/yocto-env/build/tmp/deploy/images ./images
      - tar -cvzf artifacts.tar.gz ./images/*
artifacts:
  files:
    - artifacts.tar.gz
The only drawback of this approach is that we can't (easily) use CodeBuild's caching functionality. But the build is sufficiently fast for us, since we do local builds during the day and basically one rebuild from scratch at night, which takes about 90 minutes (on the most powerful CodeBuild instance).
Sigh, so I came across this question and I am disappointed that there is no good or simple answer to this problem. There are many, many processes that strongly discourage running as root, like composer, and others that will flat-out refuse, like wp-cli. If you are using the Ubuntu "standard image" provided by AWS, then there appears to be an existing user in the /etc/passwd file, dockremap:x:1000:1000::/home/dockremap:/bin/sh. I think this user is for userns-remap in Docker, and I am not sure about its availability. The other option that astonishingly hasn't been mentioned is running useradd -N -G users develop to create a new user in the container. It is far simpler than spinning up a custom container for something so trivial.
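A minimal sketch of that last approach (the username develop is just an example, and make.sh stands in for whatever your build entry point is):
version: 0.2
phases:
  build:
    commands:
      # still root at this point: create an unprivileged user inside the build container
      - useradd -m -N -G users develop
      # hand the source tree to that user, then run the actual build as them
      - chown -R develop "$CODEBUILD_SRC_DIR"
      - su develop -s /bin/bash -c "cd $CODEBUILD_SRC_DIR && ./make.sh"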

How to add a new group and user to that group in Photon OS

Creating a Docker image from vmware/photon:2.0
I want to run the application inside that container as a user other than root.
So, I am trying to create a new group and add a user to it with the following command:
groupadd -r new-group && useradd -r -g new-group new-user
It throws:
bash: groupadd: command not found
How can I achieve this?
You can install the "shadow" package to be able to add groups/users:
tdnf -y install shadow
This is not necessary with Photon 3.0; all the prerequisite tools are already there. That could be because I installed a package that pulled them in as a dependency, or they may simply ship with the image.
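Putting it together in the Dockerfile (a sketch; the group and user names are the ones from the question):
FROM vmware/photon:2.0
# shadow provides groupadd/useradd on Photon 2.0
RUN tdnf -y install shadow && \
    groupadd -r new-group && \
    useradd -r -g new-group new-user
USER new-user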

Cannot Start GUIX Daemon

I've followed all the steps in the installation of GUIX at https://www.gnu.org/software/guix/manual/html_node/Binary-Installation.html#Binary-Installation but when I run
sudo ln -sf ~root/.guix-profile/lib/systemd/system/guix-daemon.service /etc/systemd/system/
sudo systemctl enable guix-daemon
it errors as
Failed to execute operation: Too many levels of symbolic links
The contents of ~root/.guix-profile/lib/systemd/system/guix-daemon.service look correct. Only one symbolic link appears to be involved.
What's wrong?
Update: I solved it by copying the file instead:
sudo cp -f ~root/.guix-profile/lib/systemd/system/guix-daemon.service /etc/systemd/system/
It seems systemd only follows a limited number of symbolic links when enabling a unit, and ~root/.guix-profile is itself a symlink into the store, so the chain is longer than it looks.
Update: Next problem:
The command
sudo systemctl start guix-daemon
doesn't print anything on stdout, but the daemon is not created:
ps -fel|grep guix
returns nothing.
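A plausible way to dig further (standard systemd commands, nothing Guix-specific assumed):
sudo systemctl daemon-reload          # pick up the copied unit file
sudo systemctl status guix-daemon     # unit state and last exit code
journalctl -u guix-daemon -e          # jump to the most recent log entries for the unit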