Integration Testing in CI/CD Pipeline - amazon-web-services

I have a Spring Boot project [App-Server] that I want to test.
I have created a Mock-Server Docker image for it, which is also hosted on AWS/Docker Hub.
I also use REST Assured for API testing; a Docker image for it is likewise available on AWS/Docker Hub.
Now, before creating the Docker image for App-Server, I want to perform integration testing: first build a test image of App-Server from Dockerfile.test, then on Jenkins start the App-Server image, then the Mock-Server image, and finally run REST Assured, which does the testing via mvn test. Once the tests pass, I want to create the final Docker image for App-Server.
Can this be done via Jenkins or AWS?

tl;dr: You have to create the Docker images, deploy them to a test system, and run e.g. the integration tests before creating the final release version.
Detailed answer: For your use case I suggest taking a closer look at a Git branching model (e.g. Gitflow) and at CI/CD, including containerization of the application.
Let's look at it with the following scenario. Once you fix, say, a bug on the release branch and push it to Git, your Release Jenkins job pulls it and builds Docker images tagged as a release candidate, e.g. v1.0.0-rc1. You then promote/deploy that release candidate to a release reference system together with the mocking systems (you can use AWS for this); that is the inner loop. Only when the tests complete successfully do you create the final release version, e.g. 1.0.0, and deploy it to, say, the production system; that is the outer loop.
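To make that inner loop concrete for the original question, here is a hedged sketch of how Jenkins could wire the three containers together before the final image is built; the image names, the environment variable and the Maven property are assumptions, not something the question prescribes:

# docker-compose.test.yml -- illustrative only
version: "3.8"
services:
  mock-server:
    image: your-registry/mock-server:latest        # pulled from AWS ECR / Docker Hub
  app-server:
    build:
      context: .
      dockerfile: Dockerfile.test                  # test build of App-Server
    environment:
      MOCK_SERVER_URL: http://mock-server:8080     # assumed way App-Server finds the mock
    depends_on:
      - mock-server
  rest-assured:
    image: your-registry/rest-assured-tests:latest
    command: mvn test -Dapp.host=app-server        # assumed property for the target host
    depends_on:
      - app-server

A Jenkins shell step can then run docker compose -f docker-compose.test.yml up --build --exit-code-from rest-assured, and only build, tag and push the final App-Server image when that command exits 0. So this can be done with Jenkins alone; AWS only needs to host the registry and, optionally, the build agents.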

Related

Is docker best just for prod environments?

I just decided to jump into using Docker to test out building a microservice application using AWS Fargate.
My question really relates to hearing about many development teams using Docker to stop people from saying "works on my machine" when committing code. Although I see how that problem gets solved, I still do not see how Docker images can actually be used in a development environment.
The workflow for anything other than production baffles me. An example of my thinking is...
A team of 10 devs all use Docker; each pulls the image from the repo into their own container, along with the source code. If they all have an individual version of the image, then any edits they make to that image are theirs alone, and when they push back to the repo none of those edits can be merged (not to mention that editing an image's source code is not easily done either).
I am thinking of it in the same way as Git/GitHub, where code is pushed to a branch and then merged to master to create a finished product.
I guess pulling the code from the GitHub master branch and creating the Docker image from that is the way it is meant to be used, but again that points back to my original assumption of Docker being used for production environments rather than development.
Is Docker used in development mostly so a dev can test a feature on the same container every other dev on the team is using, so that all the environments match across the team?
I just really do not understand the workflow of development environments with Docker.
I'd highlight three cases where I've found Docker particularly useful, prior to a production deploy:
Docker is really useful for installing local dependencies. If your application needs a database, docker run postgres with appropriate options. Need a clean start? Delete the container. Running two microservices that need separate databases? Start two containers. The second microservice is maintained by another team? Run it in a container too.
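As a concrete illustration of that first case, here is a hedged Compose sketch; the image tags, ports and service names are all placeholders:

# docker-compose.dev.yml -- throwaway local dependencies, everything here is illustrative
version: "3.8"
services:
  orders-db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5432:5432"
  billing-db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: dev
    ports:
      - "5433:5432"
  billing-service:                         # the other team's service, run from its published image
    image: example/billing-service:latest
    depends_on:
      - billing-db

docker compose -f docker-compose.dev.yml up -d starts everything, and docker compose -f docker-compose.dev.yml down -v gives you the "clean start" mentioned above.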
Docker is useful for capturing the build environment in the CI system. Jenkins, for example, can run build steps inside a container, bind-mounting the current work tree in, so it's useful to build an image that just contains build-time dependencies (which can be updated independently of the CI system itself).
If you're running Docker in production, you can test the exact thing you're about to run. You're guaranteed the install environment will be the same in the QA and prod environments, because it's encapsulated inside the same Docker image. A developer can debug problems against the production-installed code without actually being in production.
In the basic scenario you describe, an important detail to note is that you never "edit an image"; you always docker build a new image from its Dockerfile and other source code. In compiled languages (C++, Go, Java, Rust, Haskell) the source code won't be in the image. Even if you're "using Docker in development" the actual source code will be in some other system (frequently Git), and typically you will have a CI system that builds "official" images from that source code.
Where I see Docker proposed for day-to-day development, it's either because the language ecosystem in use makes it hard to have multiple versions concurrently installed, or to avoid installing software on the host system. You need specific tooling support to "develop inside a container", and if developers choose their own IDE, this support is not universal. Conversely, in between OS package managers (APT, Homebrew) and interpreter version managers (rbenv, nvm) it's usually straightforward to install a couple of things on the host. If your application isn't that sensitive to, say, the specific version of Node, it's probably easier to use whichever version is already installed on your host than to try to insert Docker into the process.

Continuously develop and deploy a Django app with Visual Studio Code and Docker

I am developing a Django app locally with Visual Studio Code. In preparation for deployment I "dockerized" everything and now I am already able to run this container locally.
Before I try to build my Docker image somewhere else (I have Google Cloud Run in mind), I want to make sure that I still can debug my code.
With the official 'Python in a container' tutorial I am able to set breakpoints and so on when my app runs locally with Docker.
So I think the workflow will look like this:
I develop my app locally and debug it within Visual Studio Code.
For further debugging I can do this locally with Docker as described above.
When everything looks good I push this container to Google Cloud Run or whatever.
Does that sound like a reasonable plan or did I miss something important? In the end, I am looking for an easy convenient way to continuously develop (and debug) a Django app with Visual Studio Code and deploy it with Docker.
I've never used Google Cloud Run or anything like it, but based on my experience with remote servers I can advise the following approach. You can use GitHub Actions and Docker Hub. Cover your application, or at least its critical parts, with tests that ensure everything important works properly. You can set up GitHub Actions so that your tests run every time you push to your GitHub repo. If the tests pass, an image of your application (usually named your_app:latest) is updated on Docker Hub, allowing you to build from that image. It is good practice to have multiple images: for example, you could have a stable version, say v1.0, and a beta version your_app:latest. That way you can run the stable version on a production server, while the beta version runs on a development server. Do not update stable versions; release new ones and keep the existing ones.
An example of what the GitHub Actions workflow file could look like:
name: your_app_workflow
on: [push]
jobs:
  tests:
    # run your tests here
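    # A hedged example of what this job could contain; the Python version,
    # requirements file and test command are assumptions, not part of the answer.
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: "3.9"
      - run: pip install -r requirements.txt
      - run: python manage.py test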
  push_to_docker_hub:
    name: Push Docker image to Docker Hub
    runs-on: ubuntu-latest
    needs: tests
    steps:
      - name: Check out the repo
        uses: actions/checkout@v2
      - name: Push to Docker Hub
        uses: docker/build-push-action@v1
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}
          repository: your_repository_on_dockerhub
          tag_with_ref: true
Maybe you already know the following, but I will mention it anyway. Django's built-in database is SQLite, which is not reliable for production at all, so if you are going to let others use your product, you MUST consider another database. The current standard in the web industry is PostgreSQL; there are Mongo, Redis and others, but PostgreSQL is used the most. Also, Django does not serve static and media files in production, so you will have to use a proxy server such as Nginx. Nginx cannot talk to your Django app directly, so you will need an intermediary such as Gunicorn. Again, I do not know about Google Cloud Run, but on a typical remote server you would do it this way.
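For what it's worth, here is a hedged Compose sketch of that typical stack; the project module name, credentials and file paths are placeholders, not something taken from your setup:

# docker-compose.yml -- illustrative only
version: "3.8"
services:
  db:
    image: postgres:13
    environment:
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data
  web:
    build: .
    command: gunicorn mysite.wsgi:application --bind 0.0.0.0:8000   # "mysite" is a placeholder module
    depends_on:
      - db
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxies to web and serves /static
      - ./static:/static:ro
    depends_on:
      - web
volumes:
  pgdata:

docker compose up then serves the app on port 80, with Nginx proxying to Gunicorn and serving the collected static files.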

Build and Release Ember App to Azure Service Fabric

Currently our process works, but it takes too much time because the frontend Ember app needs to be built for every single environment we have (5 environments), since we never know which environment will be available when we release.
We intend to add even more environments, because every developer should have their own working development environment (because of the backend).
The way we do it is that we create a frontend build and a backend build, which produce artifacts.
Right now the frontend build takes around 2 minutes for every environment.
ember build --env=test and ember build --env=acceptance and ember build --env=development ... and more
When the artifacts are created, we then create the release, picking the correct ones depending on which environment we release to (this is done via the release pipeline).
My question is: can we somehow make the frontend Ember build independent of the environment?
I would like to note that we are using Azure Service Fabric.
I don't think there is any way around multiple Ember builds, because each one will be different (i.e. production vs. development).
You can batch together each build inside one CI build/build task and produce artifact(s) to be used in your release pipeline.
Run the following command once for each environment you have (assuming you are using Ember-CLI) sequentially in one build task.
ember build --environment={{YOUR-ENV-HERE}} --output-path="dist/{{YOUR-ENV-HERE}}/"
You can then either upload the entire dist/ folder as an artifact and scope each environment in your release pipeline to the corresponding artifact subdirectory, or you can upload each folder inside /dist as an individual artifact and scope each environment in your release pipeline to its corresponding artifact.
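A hedged sketch of what that single build task could look like in an Azure Pipelines YAML definition; the environment names and the artifact name are assumptions:

# azure-pipelines.yml -- illustrative only
pool:
  vmImage: ubuntu-latest
steps:
  - script: |
      for env in development test acceptance production; do
        ember build --environment=$env --output-path="dist/$env/"
      done
    displayName: Build Ember app once per environment
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: dist
      ArtifactName: ember-dist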
Only the configuration changes, basically the API endpoints.

Google Cloud Build

Hi, I am new to Google Cloud Platform. I want to build a Java application using Google Cloud Build, without Docker containers. I also want the built application to be tested and the artifact to be saved in a bucket. Can anyone help me with this?
Cloud Build is conceptually a pipeline mechanism that takes some set of files as input (commonly in some source repo) and applies a number of processing steps to the files including steps that produce output: file(s) | step-1 | step-2 | ... | step-n.
Most of the examples show Cloud Build producing Docker images but this underplays all the many things it can do.
Importantly, each of the processors (steps) must be a Docker container, but the input and output need not be Docker images.
You can use javac or mvn or gradle steps to compile your code and then use the gsutil step to copy the war or jar to Google Cloud Storage.
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/javac
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gradle
https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/gsutil
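A hedged sketch of a cloudbuild.yaml along those lines; the jar name and bucket are placeholders, not from the question:

# cloudbuild.yaml -- illustrative only
steps:
  - name: gcr.io/cloud-builders/mvn      # run the tests
    args: ['test']
  - name: gcr.io/cloud-builders/mvn      # build the artifact
    args: ['package', '-DskipTests']
  - name: gcr.io/cloud-builders/gsutil   # copy the jar to a Cloud Storage bucket
    args: ['cp', 'target/my-app.jar', 'gs://my-artifact-bucket/my-app.jar']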
Since you mentioned "without Docker containers", I assume you want to deploy your application without a Docker image. You can deploy your app to Google App Engine Standard. For how to deploy to App Engine from Cloud Build, you can refer to this documentation: https://cloud.google.com/build/docs/deploying-builds/deploy-appengine
To run the application on App Engine, you create an app.yaml in your project and then put these lines inside it:
runtime: java11
entrypoint: java -Xmx64m -jar {your application artifact in jar file}
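If you drive that deployment from Cloud Build as well, a minimal deploy-only sketch could look like the following; it assumes the app.yaml and the jar are already present in the build context, and the linked documentation describes the permissions the Cloud Build service account needs:

# cloudbuild.yaml -- deploy-only sketch, illustrative only
steps:
  - name: gcr.io/cloud-builders/gcloud
    args: ['app', 'deploy']
timeout: '1600s'   # App Engine deployments can exceed the default build timeout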

Webpack: Should I build bundle on production server or build it locally and then upload?

I am deploying a React app on AWS Elastic Beanstalk. I bundle the app using webpack. However, I'm slightly confused about what the best practices are for the production build process. Should I build the app locally (with NODE_ENV=production) using webpack, and then just upload the resulting bundle.js file, along with all node_modules, to the Elastic Beanstalk instance? Or should I upload all the source files and run webpack on the actual AWS cloud server during deployment?
You should never build for production locally (unless you're the only developer).
Ideally, you have a build process that gets triggered manually or automatically from a git commit which then builds your project for production for you.
By using a centralized build process, you can then be sure that all your builds are built the same way (e.g. same node version, same npm or yarn version).
To be honest, neither approach is really good. Building locally is not the best way to build anything you want to run in production: you might have packages installed locally that affect what you are building, and the same applies to the OS you are doing it on.
The same applies to building during deployment. As the name suggests, a deployment is just that: placing your application on the server so it can serve as it is supposed to.
That is where CI/CD comes in. Having that kind of solution guarantees that each build is done with the same steps and on the same stack. You want no difference between builds, because it lets you assume that any bug or deviation from the design is caused by the code, not by the environment it was built in.
Assuming that you are the only developer here (since you are asking about this), full CI/CD might be overkill, so just create a shell script with the steps and use Docker as the build environment, so it stays the same between builds. That is the closest you can get to the CI/CD option without the hassle.
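A hedged sketch of that idea; the Node version and the build commands are assumptions about your project, not requirements:

# docker-compose.build.yml -- illustrative only
version: "3.8"
services:
  webpack-build:
    image: node:18
    working_dir: /app
    volumes:
      - .:/app
    environment:
      NODE_ENV: production
    command: sh -c "npm ci && npx webpack --mode production"

Running docker compose -f docker-compose.build.yml run --rm webpack-build then produces the same bundle.js on any machine, and you only upload the resulting build output to Elastic Beanstalk.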