Newman/Postman results to Prometheus for automated tests - Dockerfile

I've got the task of automating Postman smoke tests to run every x minutes in a Kubernetes cluster and pushing the results into Prometheus, later visualized by Grafana, with alerts pushed to a Mattermost channel.
I've created a custom Docker image based on Alpine with Newman and other packages (I didn't use the Newman Docker image because I can't add whatever I want to it), copied all my collections and environment files into the image as well, and packed the newman run command into the Dockerfile too (otherwise it did not work when I invoked it from the pod definition YAML in Kubernetes). All that is necessary is to run the container, and it creates a report placed in the /newman folder inside the container.
I've created a Kubernetes CronJob to run the container; it runs and reaches the Completed state. If I keep the container open with some loop command, I can log in and make sure that the results are there (and they are). Since this job is ephemeral and Prometheus won't have time to scrape it, I'm trying to push the results into the Prometheus Pushgateway (which I've deployed for this reason). I'm trying to curl the results into it (the command is also defined in the Dockerfile), something like cat myreport.xml | curl --data-binary @- http://push-prometheus-pushgateway:9091/metrics/job/newman
However, here is the problem: I cannot get the results formatted so that the Prometheus Pushgateway accepts them in any meaningful way. I did not find any custom 'reporter' that suits this purpose either. Currently I'm using the JUnit reporter, but I haven't managed to sed/awk the output into something the Pushgateway will digest and that makes any practical sense...
Has anyone done something similar in the past and had some success with it?
Many thanks in advance!

You may try Postman exporter for Prometheus: https://github.com/hudeldudel/postman-exporter
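If you want to stick with the Pushgateway approach instead, keep in mind that it only accepts the Prometheus text exposition format (one metric and value per line), not JUnit XML, so the report has to be reduced to a few numbers before pushing. A rough sketch, assuming Newman's JSON reporter and jq are available in the image and that the Pushgateway hostname from your question is reachable; the exact stats field names may differ in your Newman version:

#!/bin/sh
# Run the collection and write a machine-readable JSON summary
# (|| true so a failing run still gets its numbers pushed)
newman run my-collection.json -e my-env.json \
  --reporters cli,json --reporter-json-export /newman/report.json || true

# Pull a couple of counters out of the summary (field names assumed, verify against your report)
TOTAL=$(jq '.run.stats.assertions.total' /newman/report.json)
FAILED=$(jq '.run.stats.assertions.failed' /newman/report.json)

# Push them in the text exposition format the Pushgateway expects
# (the "# TYPE" lines are part of the payload, not shell comments)
cat <<EOF | curl --data-binary @- http://push-prometheus-pushgateway:9091/metrics/job/newman
# TYPE newman_assertions_total gauge
newman_assertions_total $TOTAL
# TYPE newman_assertions_failed gauge
newman_assertions_failed $FAILED
EOF

Grafana can then graph newman_assertions_failed, and an alert on a non-zero value can be routed to your Mattermost channel.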

Related

Log producer Docker container

I am trying to do performance testing with AWS FireLens. I have a JSON file with 10 sample log messages. I want to be able to produce Docker logs at a set rate, e.g. 10,000 log messages/sec from a Docker container that will be consumed by the AWS FireLens log collector.
Are there any open-source projects that already do this? Can any of you help with creating this container?
You can try this: https://github.com/mehiX/log-generator
I made it for this purpose so if there is anything you can't do let me know and I can hopefully fix it.
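If you would rather roll your own, the core of such a generator is just a loop that replays your sample messages to stdout, since the Docker logging driver (and therefore FireLens) collects whatever the container writes there. A crude sketch, assuming the samples sit in /sample-logs.json with one JSON message per line; at 10,000 messages/sec a shell loop becomes the bottleneck, so a compiled generator like the one linked above is the better fit:

#!/bin/sh
# Replay the sample file to stdout at a rough target rate.
# REPLAYS_PER_SEC is whole-file replays per second; with 10 sample messages
# per file that is roughly 10 x REPLAYS_PER_SEC log lines per second.
REPLAYS_PER_SEC=${REPLAYS_PER_SEC:-100}
while true; do
  i=0
  while [ "$i" -lt "$REPLAYS_PER_SEC" ]; do
    cat /sample-logs.json
    i=$((i + 1))
  done
  sleep 1
done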

How to get SonarQube results back to CodeBuild

I've seen many discussions online about Sonar webhooks to send scan results to Jenkins, but as a CodePipeline acolyte, I could use some basic help with the steps to supply Sonar scan results (e.g., quality-gate pass/fail status) to the pipeline.
Is the Sonar webhook the right way to go, or is it possible to use Sonar's API to fetch the status of a scan for a given code project?
Our code is in Bitbucket. I'm working with the AWS admin who will create the CodePipeline that fires when code is pushed to the repo. sonar-scanner will be run, and then we'd like the pipeline to stop if the code does not pass the Quality Gate.
If I were to use a Sonar webhook, I imagine the value for host would be, what, the AWS instance running CodeBuild?
Any pointers, references, examples welcome.
I created a PowerShell script to use with Azure DevOps that could possibly be migrated to a shell script that runs in the CodeBuild activity:
https://github.com/michaelcostabr/SonarQubeBuildBreaker
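If you end up going the API route mentioned in the question instead of a webhook, the usual pattern is to poll SonarQube after the scan finishes and fail the build on a red Quality Gate. A rough sketch of what that could look like as a post-build step in CodeBuild, assuming jq is available, SONAR_HOST and SONAR_TOKEN are set, and the scanner wrote its metadata to .scannerwork/report-task.txt (the path depends on the scanner's working directory):

#!/bin/sh
# sonar-scanner records the compute-engine task id after submitting the analysis
CE_TASK_ID=$(sed -n 's/^ceTaskId=//p' .scannerwork/report-task.txt)

# Wait for the background analysis task to finish on the SonarQube server
while :; do
  TASK_STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.status')
  [ "$TASK_STATUS" = "PENDING" ] || [ "$TASK_STATUS" = "IN_PROGRESS" ] || break
  sleep 5
done

# Look up the Quality Gate status for that analysis and fail the build if it is not OK
ANALYSIS_ID=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/ce/task?id=$CE_TASK_ID" | jq -r '.task.analysisId')
STATUS=$(curl -s -u "$SONAR_TOKEN:" "$SONAR_HOST/api/qualitygates/project_status?analysisId=$ANALYSIS_ID" | jq -r '.projectStatus.status')
echo "Quality Gate status: $STATUS"
[ "$STATUS" = "OK" ] || exit 1

A non-zero exit code from the buildspec command is enough to fail the CodeBuild stage and stop the pipeline.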

Is it possible to directly call docker run from AWS Lambda

I have a standalone Java application which I have dockerized. I want to run this container every time an object is put into S3 storage. One way is to do it via AWS Batch, which I am trying to avoid.
Is there a direct and easy way to call docker run from a lambda?
Yes and no.
What you can't do is execute docker run to run a container within the context of the Lambda call. But you can trigger a task on ECS to be executed. For this to work, you need to have a cluster set up on ECS, which means you need to pay for at least one EC2 instance. Because of that, it might be better to not use Docker, but I know too little about your application to judge that.
There are a lot of articles out there about how to connect S3, Lambda, and ECS. Here is a pretty in-depth article by Amazon that you might be interested in:
https://aws.amazon.com/blogs/compute/better-together-amazon-ecs-and-aws-lambda/
If you are looking for code, this repository implements what is discussed in the above article:
https://github.com/awslabs/lambda-ecs-worker-pattern
Here is a snippet we use in our Lambda function (Python) to run a Docker container from Lambda:
import boto3

# Start a one-off ECS task from inside the Lambda handler
result = boto3.client('ecs').run_task(
    cluster=cluster,                  # ECS cluster to run the task on
    taskDefinition=task_definition,   # task definition that describes the container
    overrides=overrides,              # per-run overrides, e.g. the command to execute
    count=1,
    startedBy='lambda'
)
We pass in the name of the cluster on which we want to run the container, as well as the task definition that defines which container to run, the resources it needs and so on. overrides is a dictionary/map with settings that you want to override in the task definition, which we use to specify the command we want to run (i.e. the argument to docker run). This enables us to use the same Lambda function to run a lot of different jobs on ECS.
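For reference, here is the same call expressed with the AWS CLI (not what the Lambda itself runs, just to show the shape of the overrides structure); the cluster, task definition, and container names are placeholders:

aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task-def \
  --count 1 \
  --started-by lambda \
  --overrides '{"containerOverrides": [{"name": "my-container", "command": ["process", "s3://my-bucket/my-key"]}]}'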
Hope that points you in the right direction.
Yes. It is possible to run containers out of Docker images stored in Docker Hub within AWS Lambda using SCAR.
For example, you can create a Lambda function to execute a container out of the ubuntu:16.04 image in Docker Hub as follows:
scar init ubuntu:16.04
And then you can run a command or a shell-script within that container upon each invocation of the function:
scar run scar-ubuntu-16-04 whoami
SCAR: Request Id: ed5e9f09-ce0c-11e7-8375-6fc6859242f0
Log group name: /aws/lambda/scar-ubuntu-16-04
Log stream name: 2017/11/20/[$LATEST]7e53ed01e54a451494832e21ea933fca
---------------------------------------------------------------------------
sbx_user1059
You can use your own Docker images stored in Docker Hub. Some limitations apply but it can be effectively used to run generic applications on AWS Lambda. It also features a programming model for file-processing event-driven applications. It uses uDocker under the hood.
Yes, try udocker.
udocker is a simple tool written in Python; it has a minimal set of dependencies so that it can be executed on a wide range of Linux systems.
udocker does not make use of Docker, nor does it require its installation.
udocker "executes" the containers by simply providing a chroot-like environment over the extracted container. The current implementation uses PRoot to mimic chroot without requiring privileges.
Examples
Pull from Docker Hub and list the pulled images.
udocker pull fedora
udocker images
Create the container from a pulled image and run it.
udocker create --name=myfed fedora
udocker run myfed cat /etc/redhat-release
It's also good to check Hackernoon, because:
In Lambda, the only place you are allowed to write is /tmp, but udocker will attempt to write to the home directory by default, among other things.
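For example, a rough way to work around that, assuming the udocker version you bundle honours the UDOCKER_DIR environment variable (worth double-checking against its docs):

# Point udocker's state at the only writable location inside Lambda
export HOME=/tmp
export UDOCKER_DIR=/tmp/udocker

udocker pull fedora
udocker create --name=myfed fedora
udocker run myfed cat /etc/redhat-release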

How Docker and Ansible fit together to implement Continuous Delivery/Continuous Deployment

I'm new to configuration management and deployment tools. I have to implement a Continuous Delivery/Continuous Deployment pipeline for one of the most interesting projects I've ever put my hands on.
First of all, individually, I'm comfortable with AWS, and I know what Ansible is, the logic behind it, and its purpose. I do not have the same level of understanding of Docker, but I get the idea. I went through a lot of Internet resources, but I can't get the big picture.
What I've been struggling with is how they fit together. Using Ansible, I can manage my infrastructure as code: building EC2 instances, installing packages... I can even deploy a full application by pulling its code, modifying config files, and starting the web server. Docker is, itself, a tool that packages an application and ensures that it can be run wherever you deploy it.
My problems are:
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Suppose we have a source code repository; the team members finish working on a feature and push their work. Jenkins detects this, runs all the acceptance/unit/integration test suites, and if they all pass, it declares the build stable. How does Docker fit in here? I mean, when the team pushes their work, does Jenkins have to pull the Dockerfile source-coded within the app, build the image of the application, start the container and run all the tests against it? Or does it run the tests the classic way and, if all is good, then build the Docker image from the Dockerfile and save it in a private place?
Should Jenkins tag the final image using x.y.z for example!?
Docker containers configuration :
Suppose we have an image built by Jenkins and stored somewhere. How do we handle deploying the same image into different environments, and even with different configuration parameters (vhost config, DB hosts, queue URLs, S3 endpoints, etc.)? What is the most flexible way to deal with this issue without breaking Docker principles? Are these configurations baked into the image when it gets built, or when the container based on it is started, and if so, how are they injected?
Ansible and Docker:
Ansible provides a Docker module to manage Docker containers. Assuming I've solved the problems mentioned above, when I want to deploy a new version x.y.z of my app, I tell Ansible to pull that image from wherever it is stored and start the app container. So how do I inject the configuration settings!? Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host!? If not, how is this handled?!
Excuse me if this is a long question or if I misspelled something, but this is me thinking out loud. I've been blocked for the past two weeks and I can't figure out the correct workflow. I want this to be a reference for future readers.
Please, it would be very helpful to read your experiences and solutions, because this looks like a common workflow.
I would like to answer in parts.
How does Docker (or Ansible and Docker) extend the Continuous Integration process!?
Since Docker images are the same everywhere, you use your Docker images as if they are production images. Therefore, when somebody commits code, you build your Docker image. You run tests against it. When all tests pass, you tag that image accordingly. Since Docker is fast, this is a feasible workflow.
Also, Docker image changes are incremental, so your images will have minimal impact on storage. When your tests fail, you may also choose to save that image. In this way, a developer can pull that image and easily investigate why the tests failed. Developers may choose to run the tests on their own machines too, since the Docker images in Jenkins and on their machines are no different.
What this brings is that all developers will have the same environment and the same versions of all software, since you decide which ones are used in the Docker images. I have come across bugs that were due to differences between developer machines. For example, on the same operating system, Unicode settings may affect your code. But with Docker images, all developers test against the same settings and the same software versions.
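In shell terms, the Jenkins job then boils down to something like the following; the image names, version, and test entrypoint are placeholders:

# Build a candidate image from the Dockerfile kept in the repository
docker build -t myrepo/myapp:candidate .

# Run the test suites inside the freshly built image
docker run --rm myrepo/myapp:candidate ./run-tests.sh

# Only if the tests pass: tag the image with the version and push it to the private registry
docker tag myrepo/myapp:candidate myrepo/myapp:1.4.2
docker push myrepo/myapp:1.4.2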
Docker containers configuration :
If you are using a private registry (and you should use one), then configuration changes will not affect hard disk space much. Therefore, except for security-related configuration such as DB passwords, you can apply configuration changes to the Docker images (baking the configuration into the container). Then you can use Ansible to apply the configuration that is not stored in the image to deployed containers before/after startup, using environment variables or Docker volumes.
https://dantehranian.wordpress.com/2015/03/25/how-should-i-get-application-configuration-into-my-docker-containers/
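As a concrete illustration of the environment-variable/volume option, the same image can be started per environment like this (the image name, variables, and file paths are placeholders):

# Same image for every environment; only the injected settings differ
docker run -d \
  -e APP_ENV=staging \
  -e DB_HOST=staging-db.internal \
  -v /etc/myapp/staging.conf:/app/config/app.conf:ro \
  myrepo/myapp:1.2.3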
Does Ansible have to log into the Docker image before it's running (this sounds insane to me) and use its Jinja2 templates the same way as with a classic host!? If not, how is this handled?!
No, Ansible will not log into the Docker image, but Ansible with Jinja2 templates can be used to change the Dockerfile. You can change the Dockerfile with templates and inject your configuration into different files. Tag your images accordingly and you have configured images ready to spin up.
Regarding your question about handling multiple environment configurations using the same Docker image: I have been planning on using a service discovery tool like Consul as a centralized config/property management tool. So, when you start your container up, you set an ENV var that tells it what application it is (appID) and what environment config it should use (e.g. MyApplication:Dev), and it pulls its config from Consul at startup. I still have to investigate the security around Consul (e.g., if we are storing DB connection credentials in there, how do we restrict who can query/update those values). I don't want to use this just for containers, but for all apps in general. Another cool capability is to change a config value in Consul and have a hook back into your app to apply the change immediately (maybe a REST endpoint on your app to push changes to and apply them dynamically). Of course, your app has to be written to support this!
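A rough sketch of that startup step, assuming a Consul agent reachable at consul:8500 and a key layout like MyApplication/Dev (the names and paths are placeholders):

#!/bin/sh
# Entrypoint sketch: pull this environment's config from Consul's KV store, then start the app
CONFIG_KEY="${APP_ID:-MyApplication}/${APP_ENV:-Dev}"
curl -s "http://consul:8500/v1/kv/${CONFIG_KEY}?raw" > /app/config/app.conf
exec /app/start.sh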
You might be interested in checking out Martin Fowler's blog articles on immutable infrastructure and on Phoenix servers.
Although not a complete solution, I have suggestions for two of your issues. They might not be perfect, but these are the practices we use in our workflow, and they have proven themselves so far.
Defining different environments - supposing you've written a different Ansible role for each environment you launch, we define an environment variable setting the environment we wish the container to belong to. We then download the suitable configuration file from an S3 bucket into the container, using the environment variable set before (which should be possible if you supply AWS creds or give your server an IAM role), and inject these parameters into the code when building it.
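For instance, the download step inside the container can be as small as this, assuming the AWS CLI is present and the instance's IAM role provides the credentials (bucket and paths are placeholders):

# Fetch the configuration that matches this container's environment from S3
aws s3 cp "s3://my-config-bucket/${APP_ENV}/app.conf" /app/config/app.conf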
Ansible doesn't need to log into the Docker app, but the solution is a bit tricky. I've tried two ways of tackling this problem, and neither is ideal. The first one is to download the configuration file as part of the Docker image's command line and build the app on container startup. While this solution works, it breaches the Docker philosophy and makes the image highly prone to build errors.
Another solution is pushing several images to your Docker Hub repo and then pulling the appropriate image according to the environment at hand.
In a broader stroke, I've tried launching our app completely with Ansible and it was hell; many configuration steps are tricky and get trickier when you try to implement them as a playbook. When I switched to maintaining the servers alone with Ansible, and deploying the app itself with Docker, things got a lot easier.

AWS Server setup with JIRA, Docker

I have an Amazon EC2 instance that I'd like to use as a development server for client projects as well as to run JIRA. I have a domain pointed to the EC2 server's IP. I'm new to Docker, so I'm unsure whether my approach is correct.
I'd like to have a JIRA container installed (with another jiradb MySQL container) running at jira.domain.com, as well as the potential to host client staging websites at client.domain.com, which point to the clients' Docker containers.
I've been trying to use this JIRA Docker image using the provided command:
docker run --detach --publish 8080:8080 cptactionhank/atlassian-jira:latest
but the container always stops running mid-setup (setup takes a while in between steps). When I run the container again, it goes back to the start of the setup.
Once I have JIRA set up, how would I run it under a subdomain? And how could I then have client.domain.com point to a separate Docker container?
Thanks in advance!
As you probably know, there are two considerations for getting Jira set up, whether as a server or a container:
You need to enter a license key early in the setup process (and it requires an Internet connection for verification), even if it's an evaluation
By default Jira will use its built-in (H2, IIRC) database, unless you configure an external one
So, in the case of 2) you probably want to make sure you have your external database ready and set up.
See Connecting Jira applications to external databases for preparatory steps for a variety of databases.
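For example, a rough sketch of running a MySQL container alongside Jira so the setup data survives restarts; the names, password, and the Jira home path inside that particular image are assumptions to verify against its README, and Jira additionally needs the MySQL connector and database settings from the page above:

# A network so the two containers can reach each other by name
docker network create jira-net

# External database for Jira, with its data on a named volume
docker run --detach --name jiradb --network jira-net \
  -e MYSQL_ROOT_PASSWORD=changeme -e MYSQL_DATABASE=jiradb \
  -v jiradb-data:/var/lib/mysql mysql:5.7

# Jira itself, also persisting its home directory so setup progress is not lost
docker run --detach --name jira --network jira-net --publish 8080:8080 \
  -v jira-home:/var/atlassian/jira cptactionhank/atlassian-jira:latest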
You didn't mention at what stage your first setup run fails. However, once you've gotten past step 1) or any further successful setup step, one of the first things I did, so as not to lose all the work I'd done, was to commit the container!
docker commit -a 'My Name' -m 'Jira configured and set up' <container ID> myrepo/myjira:mytag
That way you don't lose all your previous work and you save your container into a new image in one fell swoop.