aws s3 cp cannot find file from a GitLab build?

I'm working on a GitLab CI project where I have to push an APK to our AWS S3 bucket. For that, I have specified the keys as environment variables in the project settings of our repository. Here is my gitlab-ci.yml file:
stages:
  - build
  - deploy

variables:
  AWS_DEFAULT_REGION: us-east-2 # The region of our S3 bucket
  BUCKET_NAME: abc.bycket.info # bucket name
  FILE_NAME: ConfuRefac.apk

assembleDebug:
  stage: build
  script:
    - export ANDROID_HOME=/home/bitnami/android-sdk-linux
    - export ANDROID_NDK_HOME=/opt/android-ndk
    - export PATH=$PATH:/home/bitnami/android-sdk-linux/platform-tools/
    - export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64
    - export PATH=/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:$PATH
    - chmod +x ./gradlew
    - ./gradlew assembleDebug
    - cd app/build/outputs/apk/debug
    - mv app-debug.apk ${FILE_NAME}
  artifacts:
    paths:
      - app/build/outputs/apk/debug/${FILE_NAME}

deploys3:
  image: "python:latest" # We use python because there is a well-working AWS SDK
  stage: deploy
  dependencies:
    - assembleDebug
  script:
    - pip install awscli
    - cd app/build/outputs/apk/debug/
    - ls && pwd
    - aws s3 cp ${FILE_NAME} s3://${BUCKET_NAME}/${FILE_NAME} --recursive
So when the deploy stage kicks in, it cannot find the file, even though the ls output clearly shows that the file is there. Here is the job log:
Collecting futures<4.0.0,>=2.2.0; python_version == "2.7" (from s3transfer<0.4.0,>=0.3.0->awscli)
Using cached https://files.pythonhosted.org/packages/d8/a6/f46ae3f1da0cd4361c344888f59ec2f5785e69c872e175a748ef6071cdb5/futures-3.3.0-py2-none-any.whl
Collecting six>=1.5 (from python-dateutil<3.0.0,>=2.1->botocore==1.16.13->awscli)
Using cached https://files.pythonhosted.org/packages/65/eb/1f97cb97bfc2390a276969c6fae16075da282f5058082d4cb10c6c5c1dba/six-1.14.0-py2.py3-none-any.whl
Installing collected packages: urllib3, docutils, jmespath, six, python-dateutil, botocore, pyasn1, rsa, futures, s3transfer, PyYAML, colorama, awscli
Successfully installed PyYAML-5.3.1 awscli-1.18.63 botocore-1.16.13 colorama-0.4.3 docutils-0.15.2 futures-3.3.0 jmespath-0.10.0 pyasn1-0.4.8 python-dateutil-2.8.1 rsa-3.4.2 s3transfer-0.3.3 six-1.14.0 urllib3-1.25.9
You are using pip version 8.1.1, however version 20.1.1 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
$ cd ${PWD}/app/build/outputs/apk/debug/
$ ls && pwd
ConfuRefac.apk
/home/gitlab-runner/builds/CeGhSYCJ/0/root/confu-android/app/build/outputs/apk/debug
$ aws s3 cp ${PWD}/${FILE_NAME} s3://${BUCKET_NAME}/${FILE_NAME} --recursive
warning: Skipping file /home/gitlab-runner/builds/CeGhSYCJ/0/root/confu-android/app/build/outputs/apk/debug/ConfuRefac.apk/. File does not exist.
Running after_script
Uploading artifacts for failed job
ERROR: Job failed: exit status 1

As I indicated in the comments, the issue was that --recursive treats ${FILE_NAME} as a directory, not a file.
That makes sense, of course, because one can't recursively copy a single file.
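So the fix was simply to drop the flag and copy the single file:
aws s3 cp ${FILE_NAME} s3://${BUCKET_NAME}/${FILE_NAME}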

Related

How to use AWS CodeArtifact *within* a Dockerfile in AWS CodeBuild

I am trying to do a pip install from CodeArtifact from within a Docker build in AWS CodeBuild.
This article does not quite solve my problem: https://docs.aws.amazon.com/codeartifact/latest/ug/using-python-packages-in-codebuild.html
The login to AWS CodeArtifact happens in the pre_build phase, outside of the Docker context.
But my pip install is inside my Dockerfile (we pull from a private PyPI registry).
How do I do this without doing something horrible like setting an env variable to the password derived from reading ~/.config/pip/pip.conf after running the login command in pre_build?
You can use the environment variable PIP_INDEX_URL [1].
Below is an AWS CodeBuild buildspec.yml file where we construct the PIP_INDEX_URL for CodeArtifact by using this example from the AWS documentation.
buildspec.yml
pre_build:
  commands:
    - echo Getting CodeArtifact authorization...
    - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain "${CODEARTIFACT_DOMAIN}" --domain-owner "${AWS_ACCOUNT_ID}" --query authorizationToken --output text)
    - export PIP_INDEX_URL="https://aws:${CODEARTIFACT_AUTH_TOKEN}@${CODEARTIFACT_DOMAIN}-${AWS_ACCOUNT_ID}.d.codeartifact.${AWS_DEFAULT_REGION}.amazonaws.com/pypi/${CODEARTIFACT_REPO}/simple/"
In your Dockerfile, add an ARG PIP_INDEX_URL line just above
your RUN pip install -r requirements.txt so it can become an environment
variable during the build process:
Dockerfile
# this needs to be added before your pip install line!
ARG PIP_INDEX_URL
RUN pip install -r requirements.txt
Finally, we build the image with the PIP_INDEX_URL build-arg.
buildspec.yml
build:
  commands:
    - echo Building the Docker image...
    - docker build -t "${IMAGE_REPO_NAME}" --build-arg PIP_INDEX_URL .
As an aside, adding ARG PIP_INDEX_URL to your Dockerfile shouldn't break any
existing CI or workflows. If --build-arg PIP_INDEX_URL is omitted when
building an image, pip will still use the default PyPI index.
Specifying --build-arg PIP_INDEX_URL=${PIP_INDEX_URL} is valid, but
unnecessary. Specifying the argument name with no value will make Docker take
its value from the environment variable of the same
name[2].
Security note: If someone runs docker history ${IMAGE_REPO_NAME}, they can see the value of ${PIP_INDEX_URL} [3]. The token is only good for a maximum of 12 hours though, and you can shorten it to as little as 15 minutes with the --duration-seconds parameter of aws codeartifact get-authorization-token [4], so maybe that's acceptable. If your Dockerfile is a multi-stage build, then it shouldn't be an issue if you're not using ARG PIP_INDEX_URL in your target stage. docker build --secret does not seem to be supported in CodeBuild at this time.
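For illustration, here is a minimal multi-stage sketch; the stage names, base images, and paths are placeholders, not from the original answer:
# builder stage: the ARG, and therefore the token, exists only here
FROM python:3.9 AS builder
ARG PIP_INDEX_URL
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt
# target stage: no ARG PIP_INDEX_URL, so docker history of the final image never shows the token
FROM python:3.9-slim
COPY --from=builder /install /usr/local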
So, here is how I solved this for now. It seems kinda hacky, but it works. (EDIT: we have since switched to @phistrom's answer)
In pre_build, I run the login command and copy ~/.config/pip/pip.conf to the current build directory:
pre_build:
  commands:
    - echo Logging in to Amazon ECR...
    ...
    - echo Fetching pip.conf for PYPI
    - aws codeartifact --region us-east-1 login --tool pip --repository ....
    - cp ~/.config/pip/pip.conf .
build:
  commands:
    - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
    - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Then in the Dockerfile, I COPY that file in, do the pip install, and then rm it:
COPY requirements.txt pkg/
COPY --chown=myuser:myuser pip.conf /home/myuser/.config/pip/pip.conf
RUN pip install -r ./pkg/requirements.txt
RUN pip install ./pkg
RUN rm /home/myuser/.config/pip/pip.conf

How to transfer deployment package from S3 to EC2 instance to run python script?

AWS beginner here.
I have a repo in GitLab with a Python script and a requirements.txt file. The script has to be deployed to an EC2 Ubuntu instance (and triggered only once a day) via GitLab CI. I am creating a deployment package of the repo using CI, and through this I am deploying the zipped package to an S3 bucket. My .gitlab-ci.yml file:
image: ubuntu:18.04

variables:
  AWS_DEFAULT_REGION: eu-central-1
  GIT_SUBMODULE_STRATEGY: recursive
  S3_TEST_BUCKET: $BUCKET_UNPACK

stages:
  - deploy

TestJob:
  stage: deploy
  script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv
    - mv iso_forest_ad.py ~ # This is the python script
    - mv requirements.txt ~
    # Setup virtual environment
    - mkdir ~/forEC2
    - cd ~/forEC2
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forEC2/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forEC2/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forEC2/archive.zip .
    - cd ~
    - zip -g ~/forEC2/archive.zip iso_forest_ad.py
    - pip install awscli --upgrade
    - export PATH=$PATH:~/.local/bin
    - aws configure set aws_access_key_id $AWS_TEST_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_TEST_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws s3 cp ~/forEC2/archive.zip $BUCKET_UNPACK/anomaly-detection-deployment.zip
Contents of requirements.txt
-i https://pypi.org/simple
joblib==0.16.0; python_version >= '3.6'
numpy==1.19.0
pandas==1.0.5
psycopg2-binary==2.8.5
python-dateutil==2.8.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
pytz==2020.1
scikit-learn==0.23.1
scipy==1.5.1; python_version >= '3.6'
six==1.15.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
sqlalchemy==1.3.18
threadpoolctl==2.1.0; python_version >= '3.5'
Now, I would like to transfer the script, install the dependencies on the Ubuntu EC2 instance, and run the script.
I know one way would be to connect to the EC2 instance and run
aws s3 sync s3://s3-bucket-name/folder /home/ubuntu
as suggested in the post: Moving files from s3 to EC2 instance. But doing this, I was not able to install the dependencies from the requirements.txt file.
I would like to know if there is an alternate way (perhaps a shell script, or some other approach?) of achieving this. Since I am using Ubuntu locally too, using PuTTY is not an option for me.
The link you've posted already shows one way of doing this: UserData.
You would have to develop a bash script which not only downloads the zip file as shown in the link, but also unpacks it and installs the dependencies from requirements.txt, along with any other dependencies or configuration setup you require.
So the UserData for your instance would be something like this (pseudo-code, only a rough example):
#!/bin/bash
apt update
apt install -y unzip awscli python3-pip # awscli is not normally on ubuntu
# download and unpack the deployment package
aws s3 cp s3://optimal-aws-nz-play-config/package.zip .
unzip package.zip
cd package
pip3 install -r ./requirements.txt
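Since the script has to run only once a day, the same UserData could also register a cron entry. A sketch, where the install path /home/ubuntu/package and the schedule are assumptions:
# run the script daily at 06:00 UTC (path and schedule are assumptions)
echo "0 6 * * * ubuntu python3 /home/ubuntu/package/iso_forest_ad.py" > /etc/cron.d/iso_forest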
If this is something you do often, you could create a launch template with the instance settings and the UserData, to automatically execute these steps for each instance launched from the template.
There are also other possibilities involving CodeDeploy or CodePipeline, but plain old UserData would be a good start.
An alternative would be to use SSM Run Command. The execution of the command would be triggered from GitLab following the upload of the new S3 package.
An example of how to invoke run-command is in the docs:
aws ssm send-command \
    --document-name "AWS-RunPowerShellScript" \
    --parameters commands=["echo helloWorld"] \
    --targets Key=tag:Env,Values=Dev,Test
Instead of echo helloWorld you would supply your own bash commands; note that for a Linux instance such as Ubuntu you would use the AWS-RunShellScript document rather than AWS-RunPowerShellScript.
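A rough sketch of what that could look like here; the bucket name and tag values are placeholders, not from the question:
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["aws s3 cp s3://my-bucket/anomaly-detection-deployment.zip .","unzip -o anomaly-detection-deployment.zip -d package","cd package && python3 iso_forest_ad.py"]' \
    --targets Key=tag:Env,Values=Dev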

how to deploy to aws using ci/cd for zappa (python)

I'm using Zappa to deploy on AWS, and I wanted to implement CI/CD on AWS.
So, I created a pipeline and successfully completed the AWS CodeCommit and AWS CodeBuild stages.
I'm unable to deploy the same using AWS CodeDeploy.
My buildspec.yaml looks like this:
version: 0.2
phases:
  install:
    commands:
      - echo Setting up virtualenv
      - python -m venv venv
      - source venv/bin/activate
      - echo Installing requirements from file
      - pip install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python tests.py
      - flask db upgrade
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Starting deployment
      - zappa update dev
      - echo Deployment completed
How should I execute zappa deploy or zappa update on AWS?
I'm not sure how to create the appspec.yaml file.
Please HELP! Stuck!!
Here's a buildspec.yml file that I use. You could adjust this to suit your needs (for example, including the DB upgrade command).
version: 0.2
phases:
  install:
    commands:
      - mkdir /tmp/src/
      - mv $CODEBUILD_SRC_DIR/* /tmp/src/
      - cd /tmp/src/
      - python3 -m venv docker_env && source docker_env/bin/activate && pip install --upgrade pip==9.0.3 && pip install -r requirements.txt && zappa update production && deactivate && rm -rf docker_env
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - rm -rf /tmp/src/
      - echo Build completed on `date`
Note that this is using the Docker image danielwhatmuff/zappa:python3.6 in CodeBuild. I use this image as it's based on AWS Lambda and has been tuned for Zappa.
Zappa update to CodeDeploy:
Your buildspec.yaml looks fairly good, but there is one important point to consider:
post_build always runs, regardless of success or failure, so debug information can be pulled from a failed build.
Either check the reason for the failure in the build log, or modify your yml to look like the one below (caution: this is only a draft change; test it before using it in your systems):
version: 0.2
phases:
  install:
    commands:
      - yum -y groupinstall development
      - yum -y install zlib-devel
      - yum -y install openssl-devel
      - wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
      - tar xJf Python-3.6.0.tar.xz
      - cd Python-3.6.0
      - ./configure
      - make
      - make install
      - ln -s /usr/local/bin/python3.6 /usr/bin/python3
      - curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
      - python3 get-pip.py
      - pip3 install virtualenv
      - virtualenv -p /usr/bin/python3 venv
      - source venv/bin/activate
      - pip3 install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python3 tests.py
      - flask db upgrade
  post_build:
    commands:
      - if [ $CODEBUILD_BUILD_SUCCEEDING = 1 ]; then echo Build completed on `date`; echo Starting deployment; zappa update dev; else echo Build failed ignoring deployment; fi
      - echo Deployment completed
Hope this answers your question.
Zappa update to AWS
Below are the steps to do a Zappa update on AWS:
1. Configure AWS with an IAM user.
2. Configure the AWS CLI on the local host:
   a. pip install awscli
   b. aws configure
3. Call zappa init; it will generate zappa_settings.json based on the details provided.
4. Call zappa deploy <name provided for the environment in step 3>.
Now your application will be deployed to AWS. Whenever you need to update it, call
zappa update <name provided for the environment in step 3>
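For reference, zappa init produces a settings file along these lines (all values below are placeholders, not taken from the question):
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "project_name": "my-project",
        "runtime": "python3.6",
        "s3_bucket": "zappa-deployments-example"
    }
}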

circleci 2.0 can't find awscli

I'm using CircleCI 2.0 and it can't find aws, but their documentation clearly says that aws is installed by default.
when I use this circle.yml
version: 2
jobs:
  build:
    working_directory: ~/rian
    docker:
      - image: node:boron
    steps:
      - checkout
      - run:
          name: Pre-Dependencies
          command: mkdir ~/rian/artifacts
      - restore_cache:
          keys:
            - rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
            - rian-{{ .Branch }}
            - rian-master
      - run:
          name: Install Dependencies
          command: yarn install
      - run:
          name: Test
          command: |
            node -v
            yarn run test:ci
      - save_cache:
          key: rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
          paths:
            - "~/.cache/yarn"
      - store_artifacts:
          path: ~/rian/artifacts
          destination: prefix
      - store_test_results:
          path: ~/rian/test-results
      - deploy:
          command: aws s3 sync ~/rian s3://rian-s3-dev/ --delete
the following error occurs:
/bin/bash: aws: command not found
Exited with code 127
So if I edit the code this way:
- deploy:
    command: |
      apt-get install awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete
then I get another kind of error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package awscli
Exited with code 100
Does anyone know how to fix this?
The document you are reading is for CircleCI 1.0; the documentation for 2.0 is here:
https://circleci.com/docs/2.0/
In CircleCI 2.0, you can use your favorite Docker image. The image you are currently setting is node:boron, which does not include the aws command.
https://hub.docker.com/_/node/
https://github.com/nodejs/docker-node/blob/14681db8e89c0493e8af20657883fa21488a7766/6.10/Dockerfile
If you just want to make it work for now, you can install the aws command yourself in circle.yml:
apt-get update && apt-get install -y awscli
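Putting that together with the deploy step from the question:
- deploy:
    command: |
      apt-get update && apt-get install -y awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete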
However, to take full advantage of Docker's benefits, it is recommended that you build a custom Docker image that contains the necessary dependencies such as the aws command.
You can write your custom aws-cli Docker image something like this:
FROM circleci/python:3.7-stretch
ENV AWS_CLI_VERSION=1.16.138
RUN sudo pip install awscli==${AWS_CLI_VERSION}
I hit this issue when deploying to AWS Lambda functions and pushing files to an S3 bucket. I finally solved it and then built a Docker image to save time installing the AWS CLI every time. Here are links to the image and the repo:
https://github.com/wilson208/circleci-awscli
https://hub.docker.com/r/wilson208/circleci-awscli/
Fire a PR in or open an issue if you need anything added to the image and I will get to it when I can.
Edit:
Also, check out the readme on GitHub for examples of deploying a package to Lambda or pushing files to S3.

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears during the BUILD stage.
Project details:
- An S3 bucket is used as the source.
- The ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file and DockerFile.
- aws/codebuild/docker:1.12.1 is used as the build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
The buildspec and DockerFile are attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
CodeBuild was not accepting DockerFile (with a capital F; strangely, it was working locally, presumably because the local filesystem was case-insensitive), but Dockerfile (with a lower-case f) worked perfectly.
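In other words, the fix was just a rename:
mv DockerFile Dockerfile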
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre_build phase in your buildspec (even before the ecr login).
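For example, a sketch based on the buildspec above:
pre_build:
  commands:
    - ls -altr
    - echo Logging in to Amazon ECR...
    - $(aws ecr get-login --region eu-west-1)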