Running Cypress in Google Cloud Build - google-cloud-platform

I need to run Cypress e2e tests in Google Cloud Build. I get an error saying that I need to install Cypress's dependencies when I just run the End to End Test step. So I attempted to install the dependencies, but this occurs:
E: Unable to locate package libasound2'
E: Unable to locate package libxss1
E: Unable to locate package libnss3
E: Unable to locate package libgconf-2-4
E: Unable to locate package libnotify-dev
E: Couldn't find any package by regex 'libgtk2.0-0'
E: Couldn't find any package by glob 'libgtk2.0-0'
E: Unable to locate package libgtk2.0-0
E: Unable to locate package xvfb
Reading state information...
Building dependency tree...
Reading package lists...
Status: Downloaded newer image for ubuntu:latest
Digest: sha256:eb70667a801686f914408558660da753cde27192cd036148e58258819b927395
latest: Pulling from library/ubuntu
Using default tag: latest
Pulling image: ubuntu
How can I run Cypress in Google Cloud Build?
cloudbuild.yaml
steps:
  ... npm setup ...
  - name: 'ubuntu'
    id: Install Cypress Dependencies
    args:
      [
        'apt-get',
        'install',
        'xvfb',
        'libgtk2.0-0',
        'libnotify-dev',
        'libgconf-2-4',
        'libnss3',
        'libxss1',
        libasound2',
      ]
  - name: 'gcr.io/cloud-builders/npm:current'
    id: End to End Test
    args: ['run', 'e2e:gcb']

The issue with what you have is that steps are meant to be isolated from one another: apt-get update works, but its effect does not persist to the step where you apt-get install the required dependencies. Only data in the project directory (which defaults to /workspace) is persisted between steps.
Rather than trying to work around that, I was able to get Cypress running in Google Cloud Build by using the Cypress Docker image. One thing to note is that you will also have to cache the Cypress install inside the workspace folder during the npm install step. You'll also probably want to add the .tmp directory to your .gcloudignore (a minimal example follows the step below).
- name: node
  id: Install Dependencies
  entrypoint: yarn
  args: ['install']
  env:
    - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
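For the .gcloudignore addition mentioned above, a minimal sketch is an entry that keeps the local cache directory out of the uploaded build source:
# .gcloudignore
.tmp/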
And then you can run the tests like so
- name: docker
  id: Run E2Es
  args:
    [
      'run',
      '--workdir',
      '/e2e',
      '--volume',
      '/workspace:/e2e',
      '--env',
      'CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress',
      '--ipc',
      'host',
      'cypress/included:3.2.0'
    ]
Or, if you want to run a custom command rather than the default cypress run, you can do
- name: docker
  id: Run E2Es
  args:
    [
      'run',
      '--entrypoint',
      'yarn',
      '--workdir',
      '/e2e',
      '--volume',
      '/workspace:/e2e',
      '--env',
      'CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress',
      '--ipc',
      'host',
      'cypress/included:3.2.0',
      'e2e',
    ]
Let's break this down:
- name: docker tells Cloud Build to use the Docker Cloud Builder.
- --workdir /e2e tells Docker to use an /e2e directory in the container during the run.
- --volume /workspace:/e2e mounts the /workspace directory used by Cloud Build as the /e2e working directory inside the container.
- --env CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress tells Cypress to look in /e2e/.tmp/Cypress for the Cypress cache.
- --ipc host fixes issues with Cypress crashing during the test run.
- cypress/included:3.2.0 is the Cypress Docker image, which includes Cypress and the browsers.
And if you are running your own script:
- --entrypoint yarn overrides the default entrypoint in the cypress/included Dockerfile (which, remember, is cypress run).
- e2e is the yarn script that runs your e2es.
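For reference, the step above is roughly equivalent to running this docker command yourself (a sketch, not part of the build config):
docker run \
  --entrypoint yarn \
  --workdir /e2e \
  --volume /workspace:/e2e \
  --env CYPRESS_CACHE_FOLDER=/e2e/.tmp/Cypress \
  --ipc host \
  cypress/included:3.2.0 \
  e2e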
Hope this helps! I spent over a week trying to get this to work so I figured I'd help out anyone else facing the same issue :)

Running Cypress in Google Cloud Build now works fine with:
steps:
  # install dependencies
  - id: install-dependencies
    name: node
    entrypoint: yarn
    args: ['install']
    env:
      - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
  # run cypress
  - id: run-cypress
    name: cypress/included:7.0.1
    entrypoint: yarn
    args: ['run', 'vue-cli-service', 'test:e2e', '--headless']
    env:
      - 'CYPRESS_CACHE_FOLDER=/workspace/.tmp/Cypress'
options:
  machineType: 'E2_HIGHCPU_8'
Note:
- There is no cypress/included:latest tag, so the tag needs to be kept up to date.
- It uses the E2_HIGHCPU_8 machine type, as the default only provides 1 vCPU and 4 GB of memory.
- The example args are for Vue, but anything supported by the cypress/included image can be executed.
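Assuming the config above is saved as cloudbuild.yaml in the repository root, the build can then be submitted with:
gcloud builds submit --config cloudbuild.yaml .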

I'm not familiar with GCB, but you probably need to run apt-get update before you can run apt-get install. Try:
steps:
  ... npm setup ...
  - name: 'ubuntu'
    id: Update apt index
    args:
      [
        'apt-get',
        'update',
      ]
  - name: 'ubuntu'
    id: Install Cypress Dependencies
    args:
      [
        'apt-get',
        'install',
        'xvfb',
        'libgtk2.0-0',
        'libnotify-dev',
        'libgconf-2-4',
        'libnss3',
        'libxss1',
        'libasound2',
      ]
  - name: 'gcr.io/cloud-builders/npm:current'
    id: End to End Test
    args: ['run', 'e2e:gcb']
Also, note that you have a typo on libasound2' (it is missing its opening quote) :)
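Note, though, that because each build step runs in its own container (as the first answer explains), the update and the install would at minimum need to happen in a single step, for example with a bash entrypoint. A sketch, with the caveat that the installed packages still won't be visible to the later npm step:
- name: 'ubuntu'
  id: Install Cypress Dependencies
  entrypoint: bash
  args: ['-c', 'apt-get update && apt-get install -y xvfb libgtk2.0-0 libnotify-dev libgconf-2-4 libnss3 libxss1 libasound2']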

Related

Getting error "Already have image (with digest): gcr.io/cloud-builders/docker" while trying Gitlab CICD

I am trying to use Gitlab CI/CD with Cloud Build and Cloud Run to deploy a Flask application.
I am getting an error
starting build "Edited"
FETCHSOURCE
Fetching storage object: gs://Edited
Copying gs://Edited
\ [1 files][ 2.1 GiB/ 2.1 GiB] 43.5 MiB/s
Operation completed over 1 objects/2.1 GiB.
BUILD
Starting Step #0
Step #0: Already have image (with digest): gcr.io/cloud-builders/docker
Step #0: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #0
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
--------------------------------------------------------------------------------
Cleaning up project directory and file based variables 00:00
ERROR: Job failed: exit code 1
My .gitlab-ci.yml
image: aft/ubuntu-py-dvc
stages:
  - deploy
deploy:
  stage: deploy
  tags:
    - fts-cicd
  image: aft/ubuntu-py-gcloudsdk-dvc
  services:
    - docker:dind
  script:
    - echo $dvc > CI_PIPELINE_ID.json
    - echo $GCP_LOGIN > gcloud-service-key.json
    - dvc remote modify --local view-model-weights credentialpath CI_PIPELINE_ID.json
    - dvc pull
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
cloudbuild.yaml
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/fts-im', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/fts-im']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'fts_im', '--image', 'gcr.io/$PROJECT_ID/fts_im', '--platform', 'managed', '--region', 'asia-northeast1', '--port', '8000', '--memory', '7G', '--cpu', '2', '--allow-unauthenticated']
images:
  - gcr.io/$PROJECT_ID/fts-im
Dockerfile
FROM python:3.9.16-slim
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
ADD . /app
COPY .* app/
WORKDIR /app
ADD . .secrets
COPY CI_PIPELINE_ID.json .secrets/CI_PIPELINE_ID.json
RUN ls -la .
RUN ls -la data/
RUN pwd
RUN ls -la .secrets
RUN pip install -r requirements.txt
CMD ["gunicorn" , "-b", "0.0.0.0:8000", "wsgi:app"]
Trying other solutions, I pruned the Docker images from the VM that is used as the Runner in the CI/CD settings. I had experimented with a test repo and it worked completely; I am getting this error while replicating it on a new repo, with the name changed to fts_im.
I haven't deleted the previously built and deployed app from Cloud Build and Cloud Run, because while using the previous repo I ran the build multiple times, all successful.
As per this document, the Dockerfile should be present in the same directory as the build config file.
Run the command below to check whether a Dockerfile is present in the current directory:
docker build -t docker-whale .
If the Dockerfile is present in the same directory as the build config file, then review this documentation to ensure the correct working directory has been set in the build config file.
Make sure GitLab CI/CD is set up correctly and configured to run on the current branch.
You also have to specify the full path of the Dockerfile in the cloudbuild.yaml file.
The file should be named Dockerfile, not .Dockerfile, and it should not have any extension. Check that the Dockerfile is named correctly.
Check that you have not misspelled the image name: I can see two different image names, gcr.io/$PROJECT_ID/fts-im and gcr.io/$PROJECT_ID/fts_im, and I'm not sure whether they are two different images or you swapped _ (underscore) with - (hyphen).
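As one illustration of pointing the build at the Dockerfile path (not from the original answer, and the subdirectory name here is hypothetical), docker build's -f flag can be used in the build step:
steps:
  # Build using a Dockerfile that lives in a subdirectory of the build context
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/fts-im', '-f', 'docker/Dockerfile', '.']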

Google Cloud Run inaccessible even on successful build

My Google Cloud Run image was built successfully using Cloud Build via a GitHub repo. I don't see anything concerning in the build logs.
This is my Dockerfile:
# Use the official lightweight Node.js 10 image.
# https://hub.docker.com/_/node
FROM node:17-slim
RUN set -ex; \
apt-get -y update; \
apt-get -y install ghostscript; \
apt-get -y install pngquant; \
rm -rf /var/lib/apt/lists/*
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install dependencies.
# If you add a package-lock.json speed your build by switching to 'npm ci'.
RUN npm ci --only=production
# RUN npm install --production
# Copy local code to the container image.
COPY . ./
# Run the web service on container startup.
CMD [ "npm", "start" ]
But when I try to access the service through the public URL I see:
Oops, something went wrong…
Continuous deployment has been set up, but your repository has failed to build and deploy.
This revision is a placeholder until your code successfully builds and deploys to the Cloud Run service myapi in asia-east1 of the GCP project myproject.
What's next?
From the Cloud Run service page, click "Build History".
Examine your build logs to understand why it failed.
Fix the issue in your code or Dockerfile (if any).
Commit and push the change to your repository.
It appears that the node app did not run. What am I doing wrong?
Turns out that cloudbuild.yaml is not really optional. Adding the file with the following resolved the issue:
steps:
  # Build the container image
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA", "."]
  # Push the container image to Container Registry
  - name: "gcr.io/cloud-builders/docker"
    args: ["push", "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"]
  # Deploy container image to Cloud Run
  - name: "gcr.io/google.com/cloudsdktool/cloud-sdk"
    entrypoint: gcloud
    args:
      - "run"
      - "deploy"
      - "myapi"
      - "--image"
      - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"
      - "--region"
      - "asia-east1"
images:
  - "gcr.io/$PROJECT_ID/myapi:$COMMIT_SHA"

Getting error when trying to execute cloud build to deploy application to cloud run

I deployed an application to Cloud Run in GCP, which worked successfully using a Dockerfile. Now I am setting up CI/CD using cloudbuild.yaml. I mirrored the repo to CSR, set up Cloud Build, and placed cloudbuild.yaml in my repository. When executed from Cloud Build, it throws the following error.
Status: Downloaded newer image for gcr.io/google.com/cloudsdktool/cloud-sdk:latest
gcr.io/google.com/cloudsdktool/cloud-sdk:latest
Deploying...
Creating Revision...failed
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable.
The Dockerfile is attached below:
#pulls python 3.7’s image from the docker hub
FROM python:alpine3.7
#copies the flask app into the container
COPY . /app
#sets the working directory
WORKDIR /app
#install each library written in requirements.txt
RUN pip install -r requirements.txt
#exposes port 8080
EXPOSE 8080
#Entrypoint and CMD together just execute the command
#python app.py which runs this file
ENTRYPOINT [ "python" ]
CMD [ "app.py" ]
cloudbuild.yaml:
steps:
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/projectid/servicename', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/projectid/servicename']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args:
      - 'run'
      - 'deploy'
      - 'phase-2'
      - '--image'
      - 'gcr.io/projectid/servicename'
      - '--region'
      - 'us-central1'
      - '--platform'
      - 'managed'
images:
  - 'gcr.io/projectid/servicename'
OP got the issue resolved, as seen in the comments:
Got the issue resolved. It was because of a Python compatibility issue: I should use pip3 and python3 in the Dockerfile. I think the gcr.io image is compatible with python3.
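As an aside (not from the original answer), this kind of Cloud Run startup failure can often be reproduced locally by running the built image with the PORT environment variable set and checking that something actually listens on it; the image name here is taken from the config above:
docker run -p 8080:8080 -e PORT=8080 gcr.io/projectid/servicename
# in another terminal
curl http://localhost:8080/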

Return code: 1 Output: Dockerfile and Dockerrun.aws.json are both missing, abort deployment

I have set up a CI/CD pipeline using Travis CI so that when I push code it automatically gets deployed to AWS Elastic Beanstalk.
I am using Docker as the platform in AWS.
When I push the code it passes through Travis, but AWS shows the error "Command failed on instance. Return code: 1 Output: Dockerfile and Dockerrun.aws.json are both missing, abort deployment."
I don't need Dockerrun.aws.json as I am using a local Docker image, but I am not able to figure out why this error is shown, since there is a Dockerfile.
Travis file
sudo: required
language: node_js
node_js:
  - "10.16.0"
sudo: true
addons:
  chrome: stable
branches:
  only:
    - master
before_script:
  - npm install -g @angular/cli
script:
  - ng test --watch=false --browsers=ChromeHeadless
deploy:
  provider: elasticbeanstalk
  access_key_id:
    secure: "$accesskey"
  secret_access_key:
    secure: "$AWS_SECRET_KEY"
  region: "us-east-2"
  app: "portfolio"
  env: "portfolio-env"
  bucket_name: "elasticbeanstalk-us-east-2-646900675324"
  bucket_path: "portfolio"
Dockerfile
FROM node:12.7.0-alpine as builder
WORKDIR /src/app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
# To copy the files from build folder to directory where nginx could serve up the files
FROM nginx
EXPOSE 80
COPY --from=builder /src/app/dist/portfio /usr/share/nginx/html
Any possible solution for this one?
I had the same issue. Turns out my dockerfile was not capitalized, and AWS is case sensitive. When I changed the file name to "Dockerfile", everything worked as expected.
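One caveat worth adding (an assumption, not part of the original answer): if the file is tracked in git on a case-insensitive filesystem (the macOS and Windows defaults), a plain rename may not be picked up, so forcing the case-only rename explicitly helps:
git mv dockerfile Dockerfile
git commit -m "Rename dockerfile to Dockerfile"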

circleci 2.0 can't find awscli

I'm using CircleCI 2.0 and it can't find aws, but their documentation clearly says that aws is installed by default.
When I use this circle.yml:
version: 2
jobs:
  build:
    working_directory: ~/rian
    docker:
      - image: node:boron
    steps:
      - checkout
      - run:
          name: Pre-Dependencies
          command: mkdir ~/rian/artifacts
      - restore_cache:
          keys:
            - rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
            - rian-{{ .Branch }}
            - rian-master
      - run:
          name: Install Dependencies
          command: yarn install
      - run:
          name: Test
          command: |
            node -v
            yarn run test:ci
      - save_cache:
          key: rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
          paths:
            - "~/.cache/yarn"
      - store_artifacts:
          path: ~/rian/artifacts
          destination: prefix
      - store_test_results:
          path: ~/rian/test-results
      - deploy:
          command: aws s3 sync ~/rian s3://rian-s3-dev/ --delete
following error occurs:
/bin/bash: aws: command not found
Exited with code 127
So if I edit the code this way:
- deploy:
    command: |
      apt-get install awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete
then I get another kind of error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package awscli
Exited with code 100
Does anyone know how to fix this?
The document you are reading is for CircleCI 1.0; the documentation for 2.0 is here:
https://circleci.com/docs/2.0/
In CircleCI 2.0, you can use your favorite Docker image. The image you are currently setting is node:boron, which does not include the aws command.
https://hub.docker.com/_/node/
https://github.com/nodejs/docker-node/blob/14681db8e89c0493e8af20657883fa21488a7766/6.10/Dockerfile
If you just want to make it work for now, you can install the aws command yourself in circle.yml.
apt-get update && apt-get install -y awscli
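In context, that quick fix might look something like this in the deploy step (a sketch, reusing the same sync command from the config above):
- deploy:
    command: |
      apt-get update && apt-get install -y awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete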
However, to take full advantage of Docker's benefits, it is recommended that you build a custom Docker image that contains the necessary dependencies such as the aws command.
You can write your custom aws-cli Docker image something like this:
FROM circleci/python:3.7-stretch
ENV AWS_CLI_VERSION=1.16.138
RUN sudo pip install awscli==${AWS_CLI_VERSION}
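Such an image could then replace node:boron in circle.yml's docker section (the image name below is hypothetical, and for the job above it would also need Node and yarn baked in):
docker:
  # hypothetical image built from the Dockerfile above and pushed to a registry
  - image: yourdockerhubuser/circleci-awscli:latest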
I hit this issue when deploying to AWS lambda functions and pushing files to S3 bucket. Finally solved it and then built a docker image to save time in installing the AWS CLI every time. Here is a link to the image and the repo!
https://github.com/wilson208/circleci-awscli
https://hub.docker.com/r/wilson208/circleci-awscli/
Fire a PR in or open an issue if you need anything added to the image and I will get to it when I can.
Edit:
Also, check out the README on GitHub for examples of deploying a package to Lambda or pushing files to S3.