How to upload Capybara screenshots from a GitLab runner to the DigitalOcean cache?

I am running my Rails tests using GitLab Runner on DigitalOcean servers.
I save the bundler cache in DigitalOcean Spaces.
I am also using capybara-screenshot to take screenshots of the page when a test case fails.
When a test fails, a screenshot is saved to ./tmp/capybara/.
After the test run ends, the build servers are deleted and the screenshots are lost, which makes investigating test failures a lot harder.
Is there a way to upload the contents of the ./tmp/capybara/ folder to DigitalOcean Spaces using the key and secret which the GitLab runner uses to retrieve/upload the cache?

You can use job artifacts to save any data created by the build step; the files will be shown on the right side of the job page.
In your case:
your_build_step:
  ...
  artifacts:
    paths:
      - ./tmp/capybara
    when: always
    expire_in: 1 week
Read more about it here: https://docs.gitlab.com/ee/user/project/pipelines/job_artifacts.html
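If you specifically want the screenshots in Spaces rather than (or in addition to) job artifacts: the cache key and secret live in the runner's config.toml and are not normally exposed to the job, so the usual approach is to add the Spaces credentials as separate CI/CD variables. A rough sketch, assuming hypothetical variable names SPACES_KEY/SPACES_SECRET, a Space called your-space in the nyc3 region, and a job image with the AWS CLI installed (Spaces is S3-compatible):

your_build_step:
  # ... existing job configuration ...
  after_script:
    # SPACES_KEY / SPACES_SECRET are placeholder CI/CD variable names
    - export AWS_ACCESS_KEY_ID=$SPACES_KEY
    - export AWS_SECRET_ACCESS_KEY=$SPACES_SECRET
    # upload whatever screenshots exist, even when the job failed
    - aws s3 cp ./tmp/capybara/ s3://your-space/capybara/$CI_JOB_ID/ --recursive --endpoint-url https://nyc3.digitaloceanspaces.com || true

after_script runs whether the job passed or failed, which matches the failing-test case here.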

Related

Integration Testing in CI/CD Pipeline

I have a Spring Boot project [App-Server] which I want to test.
I have created a Mock Server Docker image for it, hosted on AWS/Docker Hub.
I have also used REST Assured for API testing; a Docker image for this is available on AWS/Docker Hub as well.
Before creating the final Docker image for App-Server, I want to perform integration testing: Dockerfile.test should build a test image of App-Server, then on Jenkins the App-Server image should start first, then the Mock-Server image, and finally REST Assured should run the tests via mvn test. Once the tests pass, I want to create the final Docker image for App-Server.
Can this be done via Jenkins or AWS?
tl;dr: You have to build the Docker images, deploy them to a test system, and run the integration tests before creating the final release version.
Detailed answer: For your use case I suggest taking a closer look at a Git branching model, e.g. Gitflow, and at CI/CD, including containerization of the application.
Consider the following scenario. Once you fix e.g. a bug in the release branch and push it to Git, your Release Jenkins job pulls it and builds Docker images tagged as a release candidate, e.g. v1.0.0-rc1. Then you promote/deploy that release candidate to a release reference system together with the mocking systems (you can use AWS for this), i.e. the inner loop. You only create the final release version, e.g. 1.0.0, once the tests complete successfully, and then deploy it to the production system, i.e. the outer loop.
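One common way to wire this up is a compose file that the Jenkins job brings up before running the tests; a minimal sketch, with image names, the port, and the test-runner service as placeholders:

# docker-compose.test.yml -- image names and the port are assumptions
version: "3.8"
services:
  app-server:
    build:
      context: .
      dockerfile: Dockerfile.test   # test image of App-Server
    ports:
      - "8080:8080"
  mock-server:
    image: your-registry/mock-server:latest
  api-tests:
    image: your-registry/rest-assured-tests:latest
    command: mvn test
    depends_on:
      - app-server
      - mock-server

A Jenkins stage can then run docker compose -f docker-compose.test.yml up --exit-code-from api-tests and only build and push the final App-Server image when that stage exits successfully.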

Strapi on DigitalOcean App Platform not using production configs

I've installed a Strapi application using DigitalOcean's App Platform. I'm also using a managed Postgres database. I can deploy with production configurations; however, Strapi is still creating and using the default SQLite database.
I followed Strapi's deployment docs here: https://docs.strapi.io/developer-docs/latest/setup-deployment-guides/deployment/hosting-guides/digitalocean-app-platform.html#add-a-managed-database
I have set up the /config/env/production/database.js and /config/env/production/server.js files. The console log during the build also confirms that the application is running in production mode. I'm not sure why it's ignoring the database.js file for Postgres, though.
You can try to explicitly set NODE_ENV=production for the container/app's environment.
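If you manage the app through an App Platform app spec, the variable can be declared there; a minimal sketch (the service name is a placeholder, and the same value can also be set in the dashboard under the app's environment variables):

services:
  - name: strapi                  # hypothetical service name
    envs:
      - key: NODE_ENV
        value: production
        scope: RUN_AND_BUILD_TIME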

Updating code on DigitalOcean via GitHub

I have a Django app on DigitalOcean (https://chicagocreativesnetwork.com/) which was uploaded via GitHub.
I need to make some changes to the CSS and HTML for this app, which I am doing locally and pushing to my GitHub repository.
How do I get the pushed GitHub updates into my Digital Ocean app?
How was the app uploaded to your DigitalOcean droplet exactly? Was the repo cloned or forked to the droplet?
Read the Caution at the end first
You could always go into your droplet console and move to the directory where your project is. Then:
1. git status (to see the state of your repo)
2. git fetch (to fetch the changes from your origin into your droplet repo)
3. git status again (to see how many commits your droplet is behind your remote repo)
4. If everything looks OK and it says you are '1 commit behind master' (if you are changing for the first time after deployment), go ahead and git pull (with your GitHub username and a personal access token as the password)
5. Do a final git status; it should now say your branch is up to date with the remote.
CAUTION - do not git push anything from your droplet console to your remote repo, even if git status shows files in red ready to be staged and committed.
Those files are local to the droplet repo and should stay as they are. Any change you make should come from:
1. Local changes pushed to your remote repository
2. Going into your droplet console and pulling the changes into your droplet repository
The workflow is detailed more clearly in the following comment:
https://stackoverflow.com/a/42001608/2155469

Env vars and Docker differences between dev, staging, and prod

Although my specific example involves Django, Docker, and Heroku, I believe these are pretty general testing/QA questions.
I have a dockerized Django app, tested in dev with Selenium, confirming that my static files are served correctly from my local folder (EXPECTED_ROOT = '/staticfiles/'). The app is deployed to Heroku and I can see (visually and in the dev tools) that the static files are pulled in from CloudFront correctly as well. I want to formalize this with the same test I'm using in dev. My first question is about whether/how environment variables are used for tests:
Do I add, for example, EXPECTED_ROOT = 'https://<somehash>.cloudfront.net/' as an env var on Heroku and use it in the Selenium test?
Also, to run this test in staging I would need to install Firefox in my Docker image like I do in dev. Perhaps this is OK in staging, but in prod I believe I should be aiming for the smallest image possible. So the question is about the differences between staging and prod:
Do I keep Firefox in my staging image, run the tests, and then send to production a replica of that Dockerfile, but now without Firefox?
Any help is appreciated.
The idea of config vars is to set up configuration variables that differ from environment to environment. Having said that, you are in control of the environment and can define what you need.
I personally would use a different approach: create a test that is independent of the environment (for example, instead of testing the expected root, I would confirm that a given DIV ID, or some other element, is found).
This would be enough to confirm the test is successful and the functionality works as expected.
The production Dockerfile indeed does not need Selenium and can be different from the one from staging.
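If you do go the env-var route from the first question, the value can simply differ per environment; a sketch of a staging-only compose override, assuming a service named web (on Heroku the equivalent is a config var with the same name):

# docker-compose.staging.yml -- hypothetical override for the staging stack
services:
  web:
    environment:
      # <somehash> is the placeholder from the question, not a real value
      - EXPECTED_ROOT=https://<somehash>.cloudfront.net/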

Run performance tests on a deployment server through TeamCity and Octopus Deploy

Hi, we are using TeamCity and Octopus for CI/CD. We have several hundred test cases.
As set up, we have a build server (Machine A) and the application gets deployed to a server (Machine B).
We use TeamCity, and the last step is a deploy step through Octopus Deploy.
We have several test cases which are executed as pre-deploy tests. Now I want to add a few performance test cases which will run on the server (Machine B). How can I do this?
Thanks in advance
Octopus already helps with this: you can write a PowerShell step that runs your performance test cases, so the actual process of performing the tests is pretty easy. Moreover, if you want to view the test results, in Octopus 2.0 you can "attach" files to a deployment via PowerShell; these are then uploaded and made available on the deployment page.
Alternatively, you can run the load tests from TeamCity using the TC JMeter plugin and parameterise the target environment. The link below can help you:
https://devblog.xero.com/run-jmeter-performance-tests-on-teamcity-8315f7ccffc1
References and sources:
https://octopus.com/docs/deployment-examples/custom-scripts