I want to add unit testing for my Ansible playbook. I am new to this and have tried a few things, but didn't understand much. How can I start, and how do I write a test case properly?
The following is a simple example:
yum:
  name: httpd
  state: present
Ansible is not a programming language but a tool that checks whether the state you describe is aligned with the actual state of the node you run it against. So you cannot unit test your tasks; in a certain way, they are already tests themselves. The underlying ansible binary that runs those tasks has unit tests of its own, used during its development.
Your example above asks Ansible to verify that httpd is present on the target machine; it will return ok if that is the case, changed if it had to install the package to fulfill the requirement, or an error if something went wrong.
Meanwhile, the fact that you cannot unit test your Ansible code does not mean that no tests are possible at all. You can perform basic static checks with yamllint and ansible-lint. To go further, you will have to run your playbook/role/collection against a test target node.
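For the static checks, assuming your playbook lives in playbook.yml (the file name is just a placeholder), both linters are one-liners that exit non-zero on findings, which makes them easy to wire into CI:
pip install yamllint ansible-lint
yamllint playbook.yml
ansible-lint playbook.yml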
This has become quite easy with CI services that let you spawn virtual machines or Docker containers from scratch and run your script to check that no error is fired, that the --check option passes successfully, that idempotency is obeyed (i.e. nothing should change on a second run with the same parameters), and that everything works as expected (e.g. in your case above, port 80 is open and you get the default Apache web page).
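As a rough sketch of what such a CI job can run (the inventory and playbook names are placeholders; grepping the recap line for changed=0 is a common idempotency trick in such setups):
ansible-playbook -i inventory playbook.yml --syntax-check
ansible-playbook -i inventory playbook.yml --check
ansible-playbook -i inventory playbook.yml
ansible-playbook -i inventory playbook.yml | grep -q 'changed=0.*failed=0' && echo 'Idempotence: pass' || (echo 'Idempotence: fail'; exit 1)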
You can write those kinds of tests yourself (running against localhost in a test VM, for example). This Mac App Store CLI role by geerlingguy uses such tests through Travis CI, as an example.
You can also use existing tools that help you write those tests in a more structured way, like Molecule. Here are some example roles using it if you are interested:
Redis role by geerlingguy
nexus3-oss role by ThoTeam [1]
[1] Note for transparency: I am the maintainer of this example repository
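If you go the Molecule route, the basic workflow looks roughly like this (exact command-line options vary between Molecule versions and drivers, so treat this as a sketch):
molecule init role my_new_role --driver-name docker
molecule test
For faster iteration while developing, molecule converge applies the role to the test instance and molecule verify runs just the verifier, instead of the full create/converge/verify/destroy sequence that molecule test performs.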
I am getting this error when running ./network.sh createChannel -c channel1 in test-network:
2022-09-21 08:35:22.905 EDT [common.tools.configtxgen] doOutputBlock -> WARN 006 Genesis block does not contain a consortiums group definition. This block cannot be used for orderer bootstrap.
then it fails. I am just trying this out and I am new to deploying Hyperledger, so I am not sure what a fix would be. This is a test environment using Docker.
I see that you managed to make it work by manually removing the running container. If you are running network.sh, always use the "down" option to clean up. Refer to the link Bring up the test network.
Once you are familiar with the script and the network, if you want to retain the data (and you are still using network.sh to bring down the network), you could remove the --volumes --remove-orphans flags from the docker-compose down call in the networkDown() function.
sh network.sh down
or
./network.sh down
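For reference, a sketch of the kind of line you would be editing inside networkDown() (the exact compose-file variables differ between fabric-samples releases; dropping the two trailing flags preserves the ledger volumes across restarts):
docker-compose -f $COMPOSE_FILE_BASE -f $COMPOSE_FILE_COUCH down --volumes --remove-orphans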
I have a Django app deployed on Kubernetes. The container also has a mount to a persistent volume containing some files that are needed for operation. I want a check that verifies the files are there and accessible at runtime every time a pod starts. The Django documentation recommends against running checks in production (the app runs in uwsgi), and because the files are only available in the production environment, the check fails when unit tested.
What would be an acceptable process for executing the checks in production?
This is a community wiki answer posted for better visibility. Feel free to expand it.
Your use case can be addressed from the Kubernetes perspective. All you have to do is use startup probes:
The kubelet uses startup probes to know when a container application has started. If such a probe is configured, it disables liveness and readiness checks until it succeeds, making sure those probes don't interfere with the application startup. This can be used to adopt liveness checks on slow starting containers, avoiding them getting killed by the kubelet before they are up and running.
With it you can use an ExecAction, which executes a specified command inside the container. The diagnostic is considered successful if the command exits with a status code of 0. A simple example is a check that a particular file exists:
exec:
  command:
  - stat
  - /file_directory/file_name.txt
You could also use a shell script but remember that:
Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell.
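Putting it together, here is a sketch of what the probe could look like on the Django container (the container name, image, file path, and thresholds are all placeholders to adapt):
containers:
- name: django-app
  image: registry.example.com/django-app:latest
  startupProbe:
    exec:
      command:
      - /bin/sh
      - -c
      - stat /mnt/files/needed_file.txt   # placeholder path on the persistent volume
    failureThreshold: 30    # allow up to 30 * 10s = 5 minutes for the files to be ready
    periodSeconds: 10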
I have a Django app deployed on ECS. I need to run fixtures, a management command that has to be run inside the container, on my first deployment.
I want to be able to run fixtures conditionally, not only on the first deployment. One way I was thinking of is to maintain a variable in my environment and run fixtures in entrypoint.sh accordingly.
Is this a good way to go about it? Also, what are some other standard ways to do the same?
Let me know if I have missed some details you might need to understand my problem.
You probably need to handle it in your entrypoint.sh script. As far as my experience goes, you won't be able to run commands conditionally on ECS without such a script.
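A sketch of what that entrypoint.sh could look like, assuming a RUN_FIXTURES variable set in the ECS task definition (the variable name and fixture file are placeholders):
#!/bin/sh
set -e
# RUN_FIXTURES is a hypothetical env var toggled in the task definition
if [ "$RUN_FIXTURES" = "true" ]; then
    python manage.py loaddata initial_data.json
fi
# hand off to the image's main command
exec "$@"
Flipping the variable in the task definition and redeploying then controls whether fixtures run, without rebuilding the image.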
I am having a hard time figuring out how to run the unit tests of a Hyperledger Sawtooth transaction processor. I am following their documentation on this topic:
https://sawtooth.hyperledger.org/docs/core/releases/1.0/app_developers_guide/testing.html
However, it does not explain how to set up the necessary environment and actually run the unit tests. I have tried bringing up the docker-compose file, which seemingly tries to build and run the tests:
docker-compose -f sawtooth-core/sdk/examples/xo_python/tests/test_tp_xo_python.yaml up
The docker-compose file seems to contain some environment variables such as
$SAWTOOTH_CORE
$INSTALL_TYPE
$ISOLATION_ID
I am not sure what values need to be set for the above environment variables, and in my case it fails because it cannot get values for them.
Any thoughts, pointers or direction on how to run the tests for the processor would be very helpful.
Many thanks!
You can poke around the Sawtooth core repo and find the values:
https://github.com/hyperledger/sawtooth-core
SAWTOOTH_CORE is the root directory of where you cloned the sawtooth-core git repository (default is your current directory)
INSTALL_TYPE is local (there may be other values, but I do not know them)
ISOLATION_ID is the Sawtooth version, for example 1.1. It is used to identify the Docker container to download.
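With those values, a sketch of invoking the compose file directly might look like this (the version tag and paths are illustrative):
export SAWTOOTH_CORE=$(pwd)/sawtooth-core
export INSTALL_TYPE=local
export ISOLATION_ID=1.1
docker-compose -f sawtooth-core/sdk/examples/xo_python/tests/test_tp_xo_python.yaml up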
You can run the tests through Docker with
bin/run_tests
Sawtooth testing is currently done with Jenkins CI. Start at Jenkinsfile to see how testing is done.
I am using Kitchen to run InSpec tests for a Puppet repository. I am able to apply the entire Puppet catalogue in a Vagrant box and then run tests for it. But what if I want to run a specific module in the Puppet code base alone? I don't want to apply the entire catalogue every single time.
Can Kitchen apply specific modules instead of the entire catalogue, and then test just those modules as well?
Something like how in RSpec I can specify a test case I want to run:
kitchen converge --path-to-file
kitchen verify --path-to-test-file
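Test Kitchen has no per-file flags like those, but one common pattern is to define one suite per module, each applying a manifest that only includes that module, and then converge/verify a single suite by name. A sketch, assuming the kitchen-puppet provisioner (option names may differ across versions):
# .kitchen.yml (sketch)
provisioner:
  name: puppet_apply
  manifests_path: manifests
  modules_path: modules
suites:
  - name: full
    provisioner:
      manifest: site.pp            # whole catalogue
  - name: mymodule                 # placeholder module name
    provisioner:
      manifest: mymodule_only.pp   # contains only "include mymodule"
With that in place, kitchen converge mymodule and kitchen verify mymodule target just that suite, and kitchen-inspec picks up its tests from test/integration/mymodule by default.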