How to unit test a Sawtooth Hyperledger transaction processor

I am having a hard time figuring out how to run the unit tests of a Sawtooth Hyperledger transaction processor. I am following their documentation on this topic:
https://sawtooth.hyperledger.org/docs/core/releases/1.0/app_developers_guide/testing.html
However, it does not explain how to set up the necessary environment or actually run the unit tests. I have tried building the docker-compose file, which seemingly tries to build and run the tests:
docker-compose -f sawtooth-core/sdk/examples/xo_python/tests/test_tp_xo_python.yaml up
The docker-compose file seems to reference some environment variables, such as:
$SAWTOOTH_CORE
$INSTALL_TYPE
$ISOLATION_ID
I am not sure what values need to be set for these environment variables, and in my case the build fails because it cannot resolve them.
Any thoughts, pointers or direction on how to run the tests for the processor would be very helpful.
Many thanks!

You can poke around the Sawtooth core repo and find the values:
https://github.com/hyperledger/sawtooth-core
SAWTOOTH_CORE is the root directory where you cloned the sawtooth-core git repository (the default is your current directory).
INSTALL_TYPE is local (there may be other values, but I do not know them).
ISOLATION_ID is the Sawtooth version, for example 1.1. It is used to identify the Docker container to download.
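Putting that together, a sketch of the environment the compose file expects (assuming you cloned sawtooth-core into your current directory; adjust the path and version to your setup):
export SAWTOOTH_CORE=$(pwd)/sawtooth-core
export INSTALL_TYPE=local
export ISOLATION_ID=1.1
docker-compose -f $SAWTOOTH_CORE/sdk/examples/xo_python/tests/test_tp_xo_python.yaml up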
You can run the tests through Docker with
bin/run_tests
Sawtooth testing is currently done with Jenkins CI. Start at the Jenkinsfile to see how testing is done.

Related

Hyperledger Fabric won't create channel in test environment when following the steps

I am getting this error when running test-network % ./network.sh createChannel -c channel1
2022-09-21 08:35:22.905 EDT [common.tools.configtxgen] doOutputBlock -> WARN 006 Genesis block does not contain a consortiums group definition. This block cannot be used for orderer bootstrap.
then it fails. I am just trying this out and I am new to deploying Hyperledger, so I am not sure what a fix would be. This is a test environment using Docker.
I see that you managed to make it work by manually removing the running container. If you are running network.sh, then always use the "down" option to clean up. Refer to the link Bring up the test network.
Once you are familiar with the script and the network, if you want to retain the data while still using network.sh to bring the network down, you could remove the --volumes --remove-orphans flags from the docker-compose down call in the networkDown() function.
sh network.sh down
or
./network.sh down
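If you do retain the volumes, the change inside networkDown() would look roughly like this sketch ($COMPOSE_FILES is a stand-in; the exact compose file list varies between fabric-samples versions):
# before: wipes named volumes and orphan containers
docker-compose -f $COMPOSE_FILES down --volumes --remove-orphans
# after: containers are removed but named volumes (ledger data) survive
docker-compose -f $COMPOSE_FILES down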

How to create unit test cases for Ansible functionalities?

I want to add unit testing for my Ansible playbook. I am new to this and have tried a few things, but didn't understand much. How can I start on this and write a test case properly?
Following is the simple example:
yum:
  name: httpd
  state: present
Ansible is not a programming language but a tool that checks that the state you describe is aligned with the actual state of the node you run it against. So you cannot unit test your tasks. They are, in a certain way, tests by themselves already. The underlying ansible binary that runs those tasks has unit tests itself, used during its development.
Your example above asks ansible to ensure httpd is present on the target machine; it will return ok if this is already the case, changed if it had to install the package to fulfill the requirement, or failed if something went wrong.
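For instance, you can observe those statuses by running the task ad hoc with the ansible CLI (assuming a RedHat-family target where yum applies):
ansible localhost -m yum -a "name=httpd state=present" --become
# the first run on a machine without the package reports "changed"
# a second run reports "ok" because the state is already satisfied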
Meanwhile, the fact that you cannot unit test your ansible code does not mean that no tests are possible at all. You can perform basic static checks with yamllint and ansible-lint. To go further, you will have to run your playbook/role/collection against a test target node.
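Both linters are one-liners to run (playbook.yml here is a placeholder for your own file):
yamllint playbook.yml
ansible-lint playbook.yml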
This has become quite easy with CI systems that let you spawn virtual machines or docker containers from scratch and run your script to check that no error is fired, that the --check option passes successfully, that idempotency is obeyed (i.e. nothing should change on a second run with the same parameters), and that everything works as expected (e.g. in your case above, port 80 is open and you get the default Apache web page).
You can write those kinds of tests yourself (running against localhost in a test vm, for example). This Mac Appstore CLI role by Geerlingguy uses such tests through travis-ci as an example.
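A minimal hand-rolled version of those checks, assuming a hypothetical playbook named site.yml, could look like this:
# static and dry-run checks
ansible-playbook --syntax-check site.yml
ansible-playbook --check site.yml
# idempotency: apply twice, the second run must report no changes
ansible-playbook site.yml
ansible-playbook site.yml | tee /tmp/rerun.log
grep -q 'changed=0.*failed=0' /tmp/rerun.log && echo idempotent || echo NOT idempotent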
You can also use existing tools to help you write those tests in a more structured way, like molecule; see the command sketch after the list below. Here are some example roles using it if you are interested:
Redis role by Geerlingguy
nexus3-oss role by ThoTeam [1]
[1] Note for transparency: I am the maintainer of this example repository
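If you try molecule, the basic workflow is a handful of commands run from inside a role that contains a molecule scenario (depending on your molecule version you may also need a driver package such as molecule[docker]):
pip install molecule
# full cycle: create an instance, apply the role, run the tests, destroy
molecule test
# or step by step while developing
molecule converge
molecule verify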

Divio app trouble creating the project directory and cloning the repository

Please help, I am having trouble getting the Divio app to work.
When I press "set up project",
it gives me this:
Creating workspace
cloning project repository
Cloning into '/c/Users/Ubisoft/Documents/iloveit'...
Bad owner or permissions on /home/divio/.ssh/config
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
There was an error trying to run a command. This is most likely
not an issue with divio-cli, but the called program itself.
Try checking the output of the command above.
The command was:
git clone git@git.divio.com:iloveit.git /c/Users/Ubisoft/Documents/iloveit
Windows PowerShell gives me exactly the same output.
I also tried this from a virtual macOS machine and got this message:
https://i.stack.imgur.com/QccvY.png
I also tried to mess around with creating SSH keys, but it didn't work out.
Can someone provide a step-by-step explanation of how to make this wonderful app work?
In the Windows examples you show, I see:
Bad owner or permissions on /home/divio/.ssh/config
I am not sure how that has happened, but that is what is preventing your local environment from providing the expected key to the Divio Control Panel.
In the Macintosh example, the environment doesn't have a key that the Control Panel knows about.
You will need to add the key (probably from ~/.ssh/id_rsa.pub) to https://control.divio.com/account/ssh-keys/. If you don't already have a key in the Macintosh environment, you will need to set one up.
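If you need to generate a key first, a typical sequence is (the comment string is just a label):
ssh-keygen -t rsa -b 4096 -C "you@example.com"
# print the public key, then paste it at https://control.divio.com/account/ssh-keys/
cat ~/.ssh/id_rsa.pub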
For anyone who may face this issue: on my macOS virtual machine I managed to find a solution. It looks like my Internet Service Provider was blocking port 22 or something like that. I used a VPN, and without any hassle with SSH I got a different result. It looks like it is working now; it has not finished creating the project yet, but it is promising:
Creating workspace
cloning project repository
Cloning into '/Users/johnwick/Documents/best-project'...
Locking the website...
remote: Counting objects: 785, done.
remote: Compressing objects: 100% (739/739), done.
Unlocking the website...(385/785), 1.05 MiB | 524.00 KiB/s
remote: Total 785 (delta 112), reused 0 (delta 0)
Receiving objects: 100% (785/785), 1.77 MiB | 448.00 KiB/s, done.
Resolving deltas: 100% (112/112), done.
Checking out files: 100% (615/615), done.
downloading remote docker images
Pulling db ... done
Pulling web ... done
building local docker images
db uses an image, skipping
Building web
Step 1/7 : FROM divio/base:4.15-py3.6-slim-stretch
4.15-py3.6-slim-stretch: Pulling from divio/base
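For anyone checking whether port 22 is the culprit before reaching for a VPN, two quick probes against the host from the clone command above are:
# does anything answer on the SSH port at all?
nc -vz git.divio.com 22
# verbose SSH handshake, shows where the connection stalls
ssh -vT git@git.divio.com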

Running specific puppet tests using kitchen

I am using kitchen to run inspec tests for the puppet repository. I am able to apply the entire puppet catalogue in a vagrant box and then run tests for it. But what if I want to run a specific module from the puppet code base alone? I don't want to apply the entire catalogue every single time.
Can kitchen apply specific modules instead of the entire catalogue? And then test for those modules as well?
Something like how in rspec I can specify the test case I want to run:
kitchen converge --path-to-file
kitchen verify --path-to-test-file
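Kitchen has no flag quite like that; its filtering works at the instance level. A common workaround is to define one suite per module in your .kitchen.yml, since kitchen converge and kitchen verify both accept an instance name or a regular expression (the instance name below is hypothetical):
kitchen converge mymodule-ubuntu-2004
kitchen verify mymodule-ubuntu-2004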

Docker on EC2, RUN command in dockerfile not reading environment variable

I have two elastic-beanstalk environments on AWS: development and production. I'm running a glassfish server on each instance, and it is requested that the same application package be deployable in production and in the development environment, without requiring two different .EAR files. The two instances differ in size: dev is a micro instance while production is a medium instance, therefore I need to deploy two different configuration files for glassfish, one for each environment.
The main problem is that the file has to be in the glassfish config directory before the server starts, therefore I thought it could be better moving it while the container was created.
Of course each environment uses a docker container to host the glassfish instance, so my first thought was to configure an environment variable in elastic-beanstalk. In this case:
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my Dockerfile I put this statement in the RUN command:
RUN if [ "$ypenvironment" = "pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment" = "dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the elastic-beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again: the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work, giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use entrypoint (or at least use a sh -c 'if...' syntax): that is for runtime execution, not compile-time image build.
2/ to use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then build with docker build --build-arg ypenvironment=dev -t myDevImage .
You will build 2 different images (based on the same Dockerfile)
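Put together, a sketch of that build-time variant, reusing the paths from the question (the image tags are made up for illustration):
# Dockerfile excerpt: ARG makes the value available during the build
ARG ypenvironment
RUN if [ "$ypenvironment" = "pro" ] ; then \
        mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    elif [ "$ypenvironment" = "dev" ] ; then \
        mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    fi
# build one image per environment
docker build --build-arg ypenvironment=dev -t myapp:dev .
docker build --build-arg ypenvironment=pro -t myapp:pro .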
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile can declare a default value for it:
ENV ypenvironment dev
But you need to run your one image with
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your entrypoint) includes the if [ "$ypenvironment" = "pro" ] ; then... you mention in your question, plus the actual launch (in the foreground) of your app.
Your script needs to not exit right away, or your container would switch to exit status right after having started.
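A sketch of such an entrypoint script (the asadmin path is an assumption; adjust it to your image):
#!/bin/sh
# pick the right config based on the runtime variable
if [ "$ypenvironment" = "pro" ] ; then
    mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
    mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
# exec keeps the server as PID 1 and in the foreground so the container stays up
exec /usr/local/glassfish/bin/asadmin start-domain --verbose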
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT instructions are special: they are set at build time, but they tell Docker what command to execute when a container is started from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
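As a sketch of that approach (the @HEAP_SIZE@ placeholder and variable name are invented for illustration; your domain.xml would need to contain the token):
#!/bin/sh
# substitute the placeholder with the runtime value before glassfish reads the file
sed -i "s/@HEAP_SIZE@/${HEAP_SIZE}/g" /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
# then start the server in the foreground as usual
exec /usr/local/glassfish/bin/asadmin start-domain --verbose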