Integrating Selenium with Gitlab CI - python-2.7

I have created an automated Selenium test script which works perfectly fine.
My task now is to set up GitLab CI so that this Selenium script runs automatically whenever I push to the repository.
Is it possible to have the script execute automatically and inform the user whether it succeeded or failed?
Thank you

How to automatically run automation tests on GitLab CI with Selenium and SpecFlow in a .NET project?
If this is something you are looking for, here is the core part: setting up the gitlab-ci.yml file.
Here is how a sample gitlab-ci.yml should look:
image: <please provide your own Docker image that can restore and build .NET projects>

variables:
  DOCKER_DRIVER: overlay2
  SOURCE_CODE_DIRECTORY: 'src'
  BINARIES_DIRECTORY: 'bin'
  OBJECTS_DIRECTORY: 'obj'
  NUGET_PACKAGES_DIRECTORY: '.nuget'

stages:
  - Build
  - Test

before_script:
  - 'dotnet restore ${SOURCE_CODE_DIRECTORY}/TestProject.sln --packages ${NUGET_PACKAGES_DIRECTORY}'

Build:
  stage: Build
  script:
    - 'dotnet build $SOURCE_CODE_DIRECTORY/TestProject.sln --no-restore'
  except:
    - tags
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
    expire_in: 2 hr

Test:
  stage: Test
  services:
    - selenium/standalone-chrome:latest
  script:
    - 'export MSBUILDSINGLELOADCONTEXT=1'
    - 'export selenium_remote_url=http://selenium__standalone-chrome:4444/wd/hub/'
    - 'export PATH=$PATH:${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
    - 'dotnet test $SOURCE_CODE_DIRECTORY/ExpressTestProject.sln --no-restore'
  artifacts:
    paths:
      - '${SOURCE_CODE_DIRECTORY}/chromedriver.exe'
      - '${SOURCE_CODE_DIRECTORY}/*/${BINARIES_DIRECTORY}'
      - '${SOURCE_CODE_DIRECTORY}/*/${OBJECTS_DIRECTORY}'
      - '${NUGET_PACKAGES_DIRECTORY}'
That's it. When you set up your project with this .gitlab-ci.yml, 90% of your job is done.
The tests will run automatically in GitLab whenever you commit something to your source tree (or TFS).
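The same idea applies to the plain Python script in the original question: the job's exit status is what GitLab uses to mark the pipeline as passed or failed, and with the default notification settings the committer is e-mailed when a pipeline fails. A minimal sketch, assuming a python:2.7 image and a script called selenium_test.py (both names are placeholders, not part of the setup above):

image: python:2.7

test:
  stage: test
  services:
    - selenium/standalone-chrome:latest
  script:
    - pip install selenium
    # point the script at the service container, e.g.
    # webdriver.Remote('http://selenium__standalone-chrome:4444/wd/hub', ...)
    - python selenium_test.py   # a non-zero exit code fails the job and the pipeline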
Thanks

Related

Copying a war file from GitLab CI to Tomcat

I am trying to create a CI/CD pipeline to build a WAR file and deploy it to a Tomcat container from GitLab.
I am using a Maven image to build the project. Once the WAR file is created, I would like to copy it to some folder so that from there it can be copied to the Tomcat server's webapps directory in a container.
My current approach is to use a Dockerfile in my project: starting from a Tomcat image, run Tomcat with the project's WAR. I tried using ADD in the Dockerfile, but the directory where the WAR resides, $CI_PROJECT_DIR, is not where the Dockerfile is looking.
The following is the ".gitlab-ci.yml" file.
stages:
  # Build project
  - build
  - package
  # Build and deploy mailService
  #- deploy

variables:
  # Variables that can be used throughout the pipeline are defined here.
  # Maven variables
  MAVEN_CLI_OPTS: '-s /appdir/.m2/settings.xml'
  MAVEN_PATH: '/appdir/opt/apache-maven-3.8.4/bin'
  IMAGE_PATH: 'gitlab-registry.gs.mil/gteam-development/docker'

project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target

build docker image:
  stage: package
  image: docker
  services:
    - docker:dind
  script:
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
  tags:
    - dind
The following is the "Dockerfile" I am using to run Tomcat using the image. I need to copy my war file to the Tomcat webapps folder and build another image.
FROM tomcat:latest
LABEL maintainer="Jacquelyne Wilson"
# ADD $CI_PROJECT/target/geoint-rfi-data-api.war /usr/local/tomcat/webapps/
EXPOSE 8080
CMD ["catalina.sh", "run"]
NOTE: These images are in our GitLab Container Registry. Hope this is enough information.
This is my first experience creating a GitLab CI pipeline; my apologies if my terms and approach are not ideal.
You can provide the WAR file built in your project-build job as an artifact, so it is available in the following job. By the way, you should probably not use spaces in the job name.
This could look like the following:
project-build:
  image: ${IMAGE_PATH}/maven
  services:
    - tomcat:latest
  stage: build
  artifacts:
    paths:
      - /path/to/your/war
  script:
    - ${MAVEN_PATH}/mvn ${MAVEN_CLI_OPTS} clean package
    - ls -als
    - ls -als target
And in the docker build job, you can then copy the WAR file from the artifact to the docker build context path so it can be ADDed to your image.
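For example, the package job could then look roughly like this (the job is renamed without spaces, and the WAR path target/geoint-rfi-data-api.war is an assumption taken from the Dockerfile above):

build-docker-image:
  stage: package
  image: docker
  services:
    - docker:dind
  # restore the WAR artifact produced by project-build into this job's workspace
  dependencies:
    - project-build
  script:
    # copy the WAR next to the Dockerfile so it is inside the docker build context
    - cp target/geoint-rfi-data-api.war .
    - echo $CI_REGISTRY_PASSWORD | docker login -u $CI_REGISTRY_USER $CI_REGISTRY --password-stdin
    - docker build -t $CI_REGISTRY_IMAGE .
    - docker push $CI_REGISTRY_IMAGE
  tags:
    - dind

The Dockerfile's ADD line would then reference the file from the build context, e.g. ADD geoint-rfi-data-api.war /usr/local/tomcat/webapps/.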
Hope this helps :)
Best regards
Andreas

Azure DevOps S3 React/MERN stack

Does anyone have any experience using Azure DevOps to deploy a React build package to AWS using their extension?
I'm stuck on uploading only the build output of npm run build.
Here is my script so far:
trigger:
  - master

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: NodeTool@0
    inputs:
      versionSpec: '10.x'
    displayName: 'Install Node.js'

  - script: |
      npm install
      npm test
      npm run build

  - task: S3Upload@1
    inputs:
      awsCredentials: 'AWS Deploy User'
      regionName: 'us-east-1'
      bucketName: 'test'
      globExpressions: '**'
      createBucket: true
    displayName: 'npm install and build'
The only option on the S3Upload task that stands out is sourceFolder. They use something like "$(Build.ArtifactStagingDirectory)", but since I've never used that before, it doesn't make a lot of sense to me. Would it be as simple as $(Build.ArtifactStagingDirectory)/build?
The predefined variable $(Build.ArtifactStagingDirectory) maps to c:\agent\_work\1\a, the local path on the agent where artifacts are copied before being pushed to their destination.
In your YAML pipeline, your source code is downloaded into $(Build.SourcesDirectory) (i.e. c:\agent\_work\1\s), and the npm commands in the script task all run in this folder. So the npm build output ends up in $(Build.SourcesDirectory)\build (i.e. c:\agent\_work\1\s\build).
The S3Upload task uploads files from $(Build.ArtifactStagingDirectory) by default. You can point the sourceFolder attribute (default: $(Build.ArtifactStagingDirectory)) of the S3Upload task to $(Build.SourcesDirectory)/build. See below:
- task: S3Upload@1
  inputs:
    awsCredentials: 'AWS Deploy User'
    regionName: 'us-east-1'
    bucketName: 'test'
    globExpressions: '**'
    createBucket: true
    sourceFolder: '$(Build.SourcesDirectory)/build'
Another workaround is to use the Copy Files task to copy the build results from $(Build.SourcesDirectory)\build to $(Build.ArtifactStagingDirectory). See the example here.
- task: CopyFiles@2
  inputs:
    Contents: 'build/**' # Pull the build directory (React)
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

Redirect error while deploying nuxt app with aws amplify

When I try to deploy the app, the build completes without errors. When I go to the link provided (https://master.d3m2wky0hslwhr.amplifyapp.com/) I get ERR_TOO_MANY_REDIRECTS.
My build config:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run build
  artifacts:
    # IMPORTANT - Please verify your build output directory
    baseDirectory: .nuxt
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*
Inside the app I have index.vue, login.vue and register.vue.
I think it may be because I'm being redirected to url/index.html and that file does not exist in the project.
Solved it.
Change npm run build to npm run generate (this creates an index.html by default and converts everything to static output).
Change baseDirectory to dist.
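With those two changes the build config would look roughly like this (only the build command and the baseDirectory differ from the original config above):

version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci
    build:
      commands:
        - npm run generate   # generates a static site, including index.html
  artifacts:
    baseDirectory: dist      # nuxt generate writes its output to dist by default
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*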

Code coverage is always unknown in GitLab

I'm trying to understand GitLab pipelines, and after a few tries I was able to successfully automate my unit tests. Now I'm trying to add the code coverage badge to my project and/or readme file, but it always shows unknown.
Files:
+ application
+ system
- unit-tests
    - tests
        UtilTest.php
    autoload.php
    phpunit
.gitignore
.gitlab-ci.yml
.htaccess
index.php
readme.md
.gitlab-ci.yml:
image: php:5.6

stages:
  - test

app:unit-tests:
  stage: test
  script:
    - php ./unit-tests/phpunit --bootstrap ./unit-tests/autoload.php ./unit-tests/tests
  coverage: '/Code Coverage: \d+\.\d+/'
In the project's Test coverage parsing settings I have the same pattern set up.
So I was able to fix this by using PHP 7.2 as the Docker image and installing xdebug in the before_script.
.gitlab-ci.yml:
image: php:7.2

stages:
  - test

before_script:
  - pecl install xdebug
  - docker-php-ext-enable xdebug

app:unit-tests:
  stage: test
  script:
    - php ./unit-tests/phpunit --bootstrap ./unit-tests/autoload.php ./unit-tests/tests --coverage-text --colors=never
  coverage: '/^\s*Lines:\s*\d+.\d+\%/'
I had to use PHP 7.2 because when I tried running pecl install xdebug it said it requires PHP 7. Ideally I would like to use PHP 5.6, since that's what our current server runs and the tests would then be on a similar version, but I'll leave it as it is for now.
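If staying on PHP 5.6 were a hard requirement, pinning an older xdebug release should work in principle; a sketch, assuming xdebug 2.5.5 (the last 2.5.x release, which still supports PHP 5.x):

image: php:5.6

before_script:
  - pecl install xdebug-2.5.5        # newer xdebug releases require PHP 7+
  - docker-php-ext-enable xdebug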
I had to add --coverage-text --colors=never to the script call for it to output the numbers. Then I changed the coverage regex to '/^\s*Lines:\s*\d+.\d+\%/', which I also used under the Test coverage parsing section in the project settings.
And now the code coverage badge properly shows my expected values.

Gitlab - Google compute engine Continuous delivery

What I am trying to do is enable continuous delivery from GitLab to my Compute Engine instance on Google Cloud. It runs Ubuntu 16.04 LTS, and I installed all the components needed to run my project: Swift, Vapor, nginx.
I managed to install GitLab Runner as well and created a runner which is accessible from my GitLab repo. Every time I push to master the runner triggers. What happens is a failure due to:
could not create leading directories of '/home/gitlab-runner/builds/2bbbbbd/0/Server/Packages/vapor.git': Permission denied
If I change the permissions with chmod -R 777, it hangs on "running" for the build stage visible in the GitLab pipeline.
I did something like:
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/builds
sudo chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/cache
but this hasn't helped; the error is the same: Permission denied
Below is my .gitlab-ci.yml:
before_script:
  - swift --version

stages:
  - build
  - deploy

job_build:
  stage: build
  before_script:
    - vapor clean
  script:
    - vapor build --release
  only:
    - master

job_run_app:
  stage: deploy
  script:
    - echo "Deploy a API"
    - vapor run --name=App --env=production
  environment:
    name: production

job_run_frontend:
  stage: deploy
  script:
    - echo "Deploy a Frontend"
    - vapor run --name=Frontend --env=production
  environment:
    name: production
But it never passes to the next stage, i.e. deploy. I waited more than 14 hours for that, without result.
And... I have a few more questions:
1. GitLab Runner creates builds under /home/gitlab-runner/builds/, where every new job gets its own folder, e.g. /home/gitlab-runner/builds/2bbbbbd/, containing my project, and the commands are executed there. So what happens when the first one is still running and I deploy a new version? Are the ports blocked by the first instance, and so on?
2. If I want to enable supervisor, how do I do that when the folder is different every time I deploy?
3. Can anyone explain, show me, or point me to a tutorial on how to do continuous deployment without Docker?
4. How do I start a service using GitLab Runner?
Thanks to a long, deep search I finally found an answer! The full article can be found above.
Briefly, the GitLab CI documentation recommends using dpl for deployment. The GitLab Runner runs the tests and then the process should end: the runner is designed to kill all processes it created after finishing each build, and it cannot perform operations outside its own working directory.
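For reference, the dpl-based deploy job from the GitLab documentation looks roughly like this (the Heroku provider, app name and variable names are the documentation's example, not specific to the Vapor setup above):

staging:
  stage: deploy
  script:
    - gem install dpl                  # dpl is a Ruby gem, so Ruby must be available in the image
    - dpl --provider=heroku --app=my-app-staging --api-key=$HEROKU_STAGING_API_KEY
  only:
    - master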