I have the following code in my GitLab repo:
package.json
{
  ...
  "scripts": {
    "test": "mocha --require ts-node/register --watch-extensions ts,tsx \"src/**/*.{spec,test}.{ts,tsx}\""
  },
  ...
}
.gitlab-ci.yml
stages:
  - test

test:
  image: node:8
  stage: test
  script:
    - npm install
    - npm run test
test.ts
import { exec } from 'child_process';
import { promisify } from 'util';

const Exec = promisify(exec);

describe('test', () => {
  before(async () => {
    // next line doesn't work in GitLab-CI
    await Exec(`docker run -d --rm -p 1113:1113 -p 2113:2113 eventstore/eventstore`);
    // and so on
  });
});
It works well when I run "npm run test" on my local machine.
My question is: how can I run this test in GitLab CI?
If you are trying to run tests that connect to an eventstore container, you can use GitLab services:
GitLab CI uses the services keyword to define what Docker containers should be linked with your base image.
First, you will need to set up the Docker executor.
Then you will be able to use eventstore as a service. The docs show an example with postgres; more information here.
Example:
test_server:
  tags:
    - docker
  services:
    - eventstore/eventstore:latest
  script:
    - npm install && npm run test
Edit:
To access the service:
The default aliases for the service’s hostname are created from its image name
Or use an alias:
services:
  - name: mysql:latest
    alias: mysql-1
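Putting that together for eventstore, a rough sketch (the image is the one from the question; EVENTSTORE_HOST is a hypothetical variable your test would read to build its connection string instead of starting the container itself):
test_server:
  tags:
    - docker
  services:
    # assumed image: the same one the test currently starts with `docker run`
    - name: eventstore/eventstore:latest
      alias: eventstore
  variables:
    # hypothetical variable the test reads instead of spawning docker itself
    EVENTSTORE_HOST: eventstore
  script:
    - npm install && npm run test
Inside the job the service is then reachable at eventstore:1113 and eventstore:2113 rather than localhost.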
Dockerfile
FROM node:lts-alpine as build-stage
ENV VUE_APP_BACKEND_SERVER=${_VUE_APP_BACKEND_SERVER}
RUN echo "server env is:"
RUN echo $VUE_APP_BACKEND_SERVER
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run gcpbuild
Cloudbuild config
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - front
      - '-f'
      - front/Dockerfile
      - '--build-arg=ENV=$_VUE_APP_BACKEND_SERVER'
    id: Build
...
...
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _VUE_APP_BACKEND_SERVER: 'https://backend.url'
I have also set the variable in the substitutions in the 'Advanced' section. However, during the build the echo prints a blank line, and the variable is not available in the app as expected.
What you need is:
FROM node:lts-alpine as build-stage
ARG VUE_APP_BACKEND_SERVER
...
Also, fix the --build-arg lines in your Cloud Build config:
      - '--build-arg'
      - 'VUE_APP_BACKEND_SERVER=${_VUE_APP_BACKEND_SERVER}'
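For context, the whole build step would then look roughly like this (a sketch only; the image, paths and substitution names are kept from your config, with the build context moved after the flags):
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - '--build-arg'
      - 'VUE_APP_BACKEND_SERVER=${_VUE_APP_BACKEND_SERVER}'
      - '-f'
      - front/Dockerfile
      - front
    id: Build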
Check out the docs.
Read more about the ARG directive in Dockerfiles.
I've previously deployed my Django + React app on a dedicated server, and now I am trying to achieve the same with an Azure Web App so I can use CI/CD more easily. I've configured my project as below, but only my Django app appears to deploy, as I get a '404 main.js and index.css not found'.
This makes me think there is an issue with my static file configuration, but I'm unsure.
.yml file:
name: Build and deploy Python app to Azure Web App - test123

on:
  push:
    branches:
      - main
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: npm install, build, and test
        run: |
          npm install
          npm run build --if-present
        working-directory: ./frontend

      - name: Set up Python version
        uses: actions/setup-python@v1
        with:
          python-version: '3.8'

      - name: Create and start virtual environment
        run: |
          python -m venv venv
          source venv/bin/activate

      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          python manage.py collectstatic --noinput

      - name: Zip artifact for deployment
        run: zip pythonrelease.zip ./* -r

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v2
        with:
          name: python-app
          path: pythonrelease.zip

      # Optional: Add step to run tests here (PyTest, Django test suites, etc.)

  deploy:
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: 'Production'
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v2
        with:
          name: python-app
          path: .

      - name: unzip artifact for deployment
        run: unzip pythonrelease.zip

      - name: 'Deploy to Azure Web App'
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: 'test123'
          slot-name: 'Production'
          publish-profile: ${{ secrets.secret }}
settings.py
STATIC_URL = '/static/'
STATIC_ROOT = 'staticfiles'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, "static"),
)
Repo Structure:
Any advice would be greatly appreciated.
Cheers
To host static files in your web app, add the whitenoise package to requirements.txt and the configuration for it to settings.py, as mentioned here: Django Tips
requirements.txt: whitenoise==4.1.2
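For reference, the usual WhiteNoise wiring in settings.py looks roughly like this (a sketch based on the WhiteNoise docs; the middleware list below is abbreviated, keep the rest of yours as-is):
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    # WhiteNoise goes directly after SecurityMiddleware
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... the rest of your middleware ...
]

# optional: compressed, cache-busting storage for the collected static files
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'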
I am trying to deploy a Flask application on AWS Lambda via Zappa through GitLab CI. Since inline editing isn't possible via GitLab CI, I generated the zappa_settings.json file on my remote computer and I am trying to use it to run zappa deploy dev.
My zappa_settings.json file:
{
  "dev": {
    "app_function": "main.app",
    "aws_region": "eu-central-1",
    "profile_name": "default",
    "project_name": "prices-service-",
    "runtime": "python3.7",
    "s3_bucket": -MY_BUCKET_NAME-
  }
}
My .gitlab-ci.yml file:
image: ubuntu:18.04

stages:
  - deploy

before_script:
  - apt-get -y update
  - apt-get -y install python3-pip python3.7 zip
  - python3.7 -m pip install --upgrade pip
  - python3.7 -V
  - pip3.7 install virtualenv zappa

deploy_job:
  stage: deploy
  script:
    - mv requirements.txt ~
    - mv zappa_settings.json ~
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    - zappa deploy dev
The CI file, upon running, gives me the following error:
Any suggestions are appreciated
zappa_settings.json is committed to the repo and not created on the fly. What is created on the fly is the AWS credentials file. The values required are read from GitLab env vars set in the web UI of the project.
zappa_settings.json
{
  "prod": {
    "lambda_handler": "main.handler",
    "aws_region": "eu-central-1",
    "profile_name": "default",
    "project_name": "dummy-name",
    "s3_bucket": "dummy-name",
    "aws_environment_variables": {
      "STAGE": "prod",
      "PROJECT": "dummy-name"
    }
  },
  "dev": {
    "extends": "prod",
    "debug": true,
    "aws_environment_variables": {
      "STAGE": "dev",
      "PROJECT": "dummy-name"
    }
  }
}
.gitlab-ci.yml
image: python:3.6

stages:
  - test
  - deploy

variables:
  AWS_DEFAULT_REGION: "eu-central-1"
  # variables set in gitlab's web gui:
  # AWS_ACCESS_KEY_ID
  # AWS_SECRET_ACCESS_KEY

before_script:
  # adding pip cache
  - export PIP_CACHE_DIR="/home/gitlabci/cache/pip-cache"

.zappa_virtualenv_setup_template: &zappa_virtualenv_setup
  # `before_script` should not be overridden in the job that uses this template
  before_script:
    # creating virtualenv because zappa MUST have it and activating it
    - pip install virtualenv
    - virtualenv ~/zappa
    - source ~/zappa/bin/activate
    # installing requirements in virtualenv
    - pip install -r requirements.txt

test code:
  stage: test
  before_script:
    # installing testing requirements
    - pip install -r requirements_testing.txt
  script:
    - py.test

test package:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: test
  script:
    - zappa package $ZAPPA_STAGE

deploy to production:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: deploy
  environment:
    name: production
  script:
    # creating aws credentials file
    - mkdir -p ~/.aws
    - echo "[default]" >> ~/.aws/credentials
    - echo "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
    - echo "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
    # try to update, if the command fails (probably not even deployed) do the initial deploy
    - zappa update $ZAPPA_STAGE || zappa deploy $ZAPPA_STAGE
  after_script:
    - rm ~/.aws/credentials
  only:
    - master
I haven't used zappa in a while, but I remember that a lot of errors were caused by bad AWS credentials, with zappa reporting something else.
I am running an Elasticsearch service in Docker on AWS, and I have another Docker container that runs a .NET application; this is an excerpt of the docker-compose file I am using:
version: '3.1'

services:
  someservice:
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - AWS_ACCESS_KEY_ID
        - AWS_REGION
        - AWS_SECRET_ACCESS_KEY
        - AWS_SESSION_TOKEN
    restart: always
    container_name: some-server
    ports:
      - 8080:8080
    links:
      - elastic

  elastic:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    environment:
      TAKE_FILE_OWNERSHIP: "true"
    volumes:
      - ./.data/elastic:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      nofile:
        soft: "65536"
        hard: "65536"
And this is an excerpt of the Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS build-env
WORKDIR /app
RUN mkdir /Service/
WORKDIR /app/Service/
COPY local_folder/Service/*.csproj ./
RUN dotnet restore

# Copy everything else and build
COPY local_folder/Service/. ./
RUN dotnet publish -c Release -o out

# Build runtime image
FROM microsoft/dotnet:aspnetcore-runtime
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
EXPOSE 8080
WORKDIR /app
COPY --from=build-env /app/Service/out .
ENTRYPOINT ["dotnet", "Service.dll"]
In the Service application I am using the NEST client, something like this:
var settings = new ConnectionSettings(new Uri(ELASTIC_SERVICE_URI)).DefaultIndex("some-index").EnableDebugMode();
var client = new ElasticClient(settings);

var pingResponse = client.Ping();
if (pingResponse.IsValid)
{
    string searchTerm = "*";
    if (searchParam != null)
    {
        searchTerm = "*" + WebUtility.UrlDecode(searchParam) + "*";
    }

    int from = (int)(fromParam == null ? 0 : fromParam);
    int size = (int)(sizeParam == null ? 10 : sizeParam);

    var searchResponse = client.Search<SomeType>(s => s
        .From(from)
        .Size(size)
        .AllTypes()
        .Query(q => q
            .Bool(bq => bq
                .Must(m => m
                    .QueryString(qs => qs
                        .Query(searchTerm)
                        .AnalyzeWildcard(true)
                        .DefaultField("*")
                    )
                )
            )
        )
    );

    return Ok(searchResponse.Documents);
If I run the Docker containers locally, I can set the ELASTIC_SERVICE_URI const to http://elastic:9200 and it works; I can even point it at the Elasticsearch service running in AWS like this: https://HOST.us-east-2.es.amazonaws.com
In both cases the service works fine and the data from the search is returned.
But when I run the containers on AWS, the data is not retrieved; I just get an empty collection.
What can be wrong?
I can trigger my AWS pipeline from Jenkins, but I don't want to create a buildspec.yaml; instead I want to use the pipeline script which already works for Jenkins.
In order to use CodeBuild, you need to provide the CodeBuild project with a buildspec.yaml file along with your source code, or incorporate the commands into the actual project.
However, I think you are interested in having the creation of the buildspec.yaml file done within the Jenkins pipeline.
Below is a snippet of a stage within a Jenkinsfile; it creates a buildspec file for building Docker images and then sends the contents of the workspace to a CodeBuild project. This uses the CodeBuild plugin for Jenkins.
stage('Build - Non Prod'){
    String nonProductionBuildSpec = """
        version: 0.1
        phases:
          pre_build:
            commands:
              - \$(aws ecr get-login --registry-ids <number> --region us-east-1)
          build:
            commands:
              - docker build -t ces-sample-docker .
              - docker tag $NAME:$TAG <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
          post_build:
            commands:
              - docker push <account-number>.dkr.ecr.us-east-1.amazonaws.com/$NAME:$TAG
    """.replace("\t"," ")
    writeFile file: 'buildspec.yml', text: nonProductionBuildSpec

    //Send checked out files to AWS
    awsCodeBuild projectName: "<codebuild-projectname>", region: "us-east-1", sourceControlType: "jenkins"
}
I hope this gives you an idea of what's possible.
Good luck!
Patrick
You will need to write a buildspec for the commands that you want AWS CodeBuild to run. If you use the CodeBuild plugin for Jenkins, you can add that to your Jenkins pipeline and use CodeBuild as a Jenkins build slave to execute the commands in your buildspec.
See more details here: https://docs.aws.amazon.com/codebuild/latest/userguide/jenkins-plugin.html
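For reference, a minimal buildspec sketch (version 0.2; the command is a placeholder, mirror whatever your Jenkins pipeline script currently runs):
version: 0.2

phases:
  build:
    commands:
      # placeholder - put the commands from your Jenkins pipeline script here
      - echo "build commands go here"
artifacts:
  files:
    - '**/*'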
@hynespm - excellent example mate.
Here is another one based on yours but with stripIndent() and "withAWS" to switch roles:
#!/usr/bin/env groovy
def cbResult = null
pipeline {
    .
    .
    .
    script {
        echo ("app_version TestwithAWS value : " + "${app_version}")
        String buildspec = """\
            version: 0.2
            env:
              parameter-store:
                TOKEN: /some/token
            phases:
              pre_build:
                commands:
                  - echo "List files...."
                  - ls -l
                  - echo "TOKEN is ':' \${TOKEN}"
              build:
                commands:
                  - echo "build':' Do something here..."
                  - echo "\${CODEBUILD_SRC_DIR}"
                  - ls -l "\${CODEBUILD_SRC_DIR}"
              post_build:
                commands:
                  - pwd
                  - echo "postbuild':' Done..."
            """.stripIndent()
        withAWS(region: 'ap-southeast-2', role: 'CodeBuildWithJenkinsRole', roleAccount: '123456789123', externalId: '123456-2c1a-4367-aa09-7654321') {
            sh 'aws ssm get-parameter --name "/some/token"'
            try {
                cbResult = awsCodeBuild projectName: 'project-lambda',
                    sourceControlType: 'project',
                    credentialsType: 'keys',
                    awsAccessKey: env.AWS_ACCESS_KEY_ID,
                    awsSecretKey: env.AWS_SECRET_ACCESS_KEY,
                    awsSessionToken: env.AWS_SESSION_TOKEN,
                    region: 'ap-southeast-2',
                    envVariables: '[ { GITHUB_OWNER, special }, { GITHUB_REPO, project-lambda } ]',
                    artifactTypeOverride: 'S3',
                    artifactLocationOverride: 'special-artifacts',
                    overrideArtifactName: 'True',
                    buildSpecFile: buildspec
            } catch (Exception cbEx) {
                cbResult = cbEx.getCodeBuildResult()
            }
        }
    } //script
    .
    .
    .
}