hyperledger fabric - stuck at creating cli - blockchain

I was building the network at the Fabric level, following this tutorial: http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html
I have made changes in the following files and added 2 more peers in organisation 1 (Org1) only:
configtx.yaml
crypto-config.yaml
docker-compose-cli.yaml
docker-compose-couch.yaml
docker-compose-e2e.yaml
docker-compose-e2e-template.yaml
docker-compose-base.yaml
When I fire the ./byfn.sh -m up command, it gets stuck at this step (see the screenshot) and doesn't even show an error.
I'm trying to add 2 more peers to the first organisation. Is this the correct way? Am I doing something wrong?

This also happened to me. What worked for me:
1. Stop and remove the existing containers: sudo docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q) (be careful to only remove the containers that belong to the Fabric nodes).
2. Reboot your computer and restart your Docker service with sudo service docker start or systemctl start docker.
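On the original question of adding more peers: in the BYFN sample, the usual way is to raise the peer count for Org1 in crypto-config.yaml and then mirror the existing peer definitions in the docker-compose files. A minimal sketch of the crypto-config.yaml change (assuming the standard first-network layout; keep the other fields as they are in the sample and regenerate the crypto material and channel artifacts afterwards):
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 4   # was 2; cryptogen will now generate peer0..peer3 for Org1
    Users:
      Count: 1
The new peers (peer2.org1.example.com, peer3.org1.example.com) also need service entries in docker-compose-base.yaml and the files that reference it, modelled on the existing peer0/peer1 blocks.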

Related

how to reconnect to a docker logs --follow where the log file was deleted

I have a Docker container running in a small AWS instance with limited disk space. The logs were getting bigger, so I used the commands below to delete the ever-growing log files:
sudo -s -H
find /var -name "*json.log" | grep docker | xargs -r rm
journalctl --vacuum-size=50M
Now I want to see the behaviour of one of the running Docker containers, but it claims the log file has disappeared (because of the rm command above):
ubuntu@x-y-z:~$ docker logs --follow name_of_running_docker_1
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log: no such file or directory
I would like to be able to see again what's going on in the running container, so I tried:
sudo touch /var/lib/docker/containers/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3/d9562d25787aaf3af2a2bb7fd4bf00994f2fa1a4904979972adf817ea8fa57c3-json.log
And ran docker logs --follow again, but while interacting with the software that should produce logs, I can see that nothing is happening.
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Is there any way to rescue the printing into the log file again without killing (rebooting) the containers?
Yes, but it's more of a trick than a real solution. You should never interact with /var/lib/docker data directly. As per Docker docs:
Volumes are stored in a part of the host filesystem which is managed by Docker (/var/lib/docker/volumes/ on Linux). Non-Docker processes should not modify this part of the filesystem.
For this trick to work, you need to configure your Docker daemon to keep containers alive during daemon downtime before first running your container, for example by setting /etc/docker/daemon.json with:
{
"live-restore": true
}
This requires a daemon restart, e.g. sudo systemctl restart docker.
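To confirm the setting took effect after the restart, one quick check (this relies on a standard docker info template field, not anything specific to this trick):
$ docker info --format '{{ .LiveRestoreEnabled }}'
true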
Then create a container and delete its .log file:
$ docker run --name myhttpd -d httpd:alpine
$ sudo rm $(docker inspect myhttpd -f '{{ .LogPath }}')
# Docker is not happy
$ docker logs myhttpd
error from daemon in stream: Error grabbing logs: open /var/lib/docker/containers/xxx-json.log: no such file or directory
Restart the daemon (with live restore enabled); this will cause Docker to re-take management of the container and recreate the log file. However, any logs generated before the log file was deleted are lost.
$ sudo systemctl restart docker
$ docker logs myhttpd # works! and log file is created back
Note: this is not a documented or official Docker feature, simply a behavior I observed in my own experiments with Docker 19.03. It may not work with other Docker versions.
With live restore enabled, the container process keeps running even though the Docker daemon is stopped. On daemon restart, it probably re-attaches to the still-running process's stdout and stderr and redirects the output to the log file (hence re-creating it).
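As a side note on the original disk-space problem: rather than deleting log files by hand, the json-file logging driver can rotate them for you. A sketch of the relevant /etc/docker/daemon.json options (the size and file count below are placeholder values; this requires a daemon restart and only applies to containers created afterwards):
{
  "live-restore": true,
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}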

'peer' command not found hyperledger

I'm working on this tutorial:
http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
In the section "Create & Join Channel", at the command:
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem
I received this error:
No command 'peer' found, did you mean:
Command 'pee' from package 'moreutils' (universe)
Command 'beer' from package 'gerstensaft' (universe)
Command 'peel' from package 'ears' (universe)
Command 'pear' from package 'php-pear' (main)
peer: command not found
Since you are following the guide, I suppose you are using Docker, and it seems that you are not connected to the cli container; otherwise, the "peer" command would have been found (I might be mistaken).
To connect to the cli container:
docker exec -it cli bash
If this is not the problem, you can try running the command from the bin folder:
/usr/local/bin
But this folder should be in the PATH environment variable, for example:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
This error means that your shell cannot find the peer binary, so the path to the Fabric binaries must be included in your PATH. If you are in one of the sample network directories inside fabric-samples (where the bin folder is one level up), run:
export PATH=${PWD}/../bin:$PATH
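To verify the binary is now visible (just a sanity check, not part of the original answer):
which peer
peer version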
If you are in the test-network folder as I am, first try the following two commands, which are in the "Interacting with the network" section:
export PATH=${PWD}/../bin:$PATH
export FABRIC_CFG_PATH=$PWD/../config/
Then you will be able to set the environment variables which allow you to operate the peer CLI as Org1 or Org2.
This assumes your network is already up and running.
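For reference, the Org1 variables from that section look roughly like this (the paths assume the standard test-network layout of recent fabric-samples; adjust them if your checkout differs):
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051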
Please check which Docker container you're using to run peer commands.
Run docker ps and check the container names.
The chaincode is built and started in the chaincode container:
docker exec -it chaincode bash
To interact with the network and run peer commands, use the cli container:
docker exec -it cli bash
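A quick way to list just the container names when checking what is running (a standard docker ps format template, nothing Fabric-specific):
docker ps --format '{{.Names}}'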

docker restart container failed: "already in use", but there's no more docker image

I first got my nginx docker image:
docker pull nginx
Then I started it:
docker run -d -p 80:80 --name webserver nginx
Then I stopped it:
docker stop webserver
Then I tried to restart it:
$ docker run -d -p 80:80 --name webserver nginx
docker: Error response from daemon: Conflict. The container name "/webserver" is already in use by container 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74. You have to remove (or rename) that container to be able to reuse that name..
See 'docker run --help'.
Well, it's an error. But in fact there's nothing in the container list now:
docker container list
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Why did restarting the nginx container fail? How can I fix it?
It is because:
you have used the --name switch, and
the container is stopped but not removed.
You will find it in the stopped state with:
docker ps -a
You can simply start it again using the command below:
docker start webserver
EDIT: Alternatives
If you want to start the container with the command below each time,
docker run -d -p 80:80 --name webserver nginx
then use one of the following:
Method 1: use the --rm switch, i.e. the container gets destroyed automatically as soon as it is stopped:
docker run -d -p 80:80 --rm --name webserver nginx
Method 2: stop and remove the container explicitly before running the command you are currently using:
docker stop <container name>
docker rm <container name>
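As a shorthand for method 2 (not from the original answer, but standard Docker CLI), docker rm -f force-removes the container even if it is still running:
docker rm -f webserver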
As the error says:
You have to remove (or rename) that container to be able to reuse that name.
This leaves you two options.
You may delete the container that is using the name "webserver" using the command
docker rm 036a0bcd196c5b23431dcd9876cac62082063bf62a492145dd8a55141f4dfd74
and retry.
Or you may use a different name in the run command. This is not recommended, as you no longer need the old container.
It's better to remove the unwanted container and reuse the name.
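For completeness, the "rename" route that the error message mentions would look like this, keeping the old container around under a new name (webserver-old is just an example):
docker rename webserver webserver-old
docker run -d -p 80:80 --name webserver nginx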
While the great answers are correct, they didn't actually solve the problem I was facing.
How To:
Safely automate starting of a named Docker container regardless of its prior state
The solution is to wrap the docker run command with an additional check and either do a plain run or a stop + run (effectively a restart with the new image), based on the result.
This achieves both of my goals:
Avoids the error
Allows me to periodically update the image (say, a new build) and restart safely
#!/bin/bash
# Adapt the following 3 parameters to your specific case
NAME=myname
IMAGE=myimage
RUN_OPTIONS='-d -p 8080:80'

# If a container with this name is already running, stop it first
ContainerID="$(docker ps --filter name="$NAME" -q)"
if [[ ! -z "$ContainerID" ]]; then
    echo "$NAME already running as container $ContainerID: stopping ..."
    docker stop "$ContainerID"
fi

# Start a fresh container; --rm removes it automatically once it stops
echo "Starting $NAME ..."
exec docker run --rm --name "$NAME" $RUN_OPTIONS "$IMAGE"
Now I can run (or stop + start, if already running) the $NAME Docker container in an idempotent way, without worrying about this possible failure.
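Applied to the nginx example from the question, the parameters at the top of the script might be set as follows, and the script then run directly (the filename run-webserver.sh is just an example):
# in the script: NAME=webserver, IMAGE=nginx, RUN_OPTIONS='-d -p 80:80'
chmod +x run-webserver.sh
./run-webserver.sh   # first run starts webserver; later runs stop it and start a fresh copy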

could not access localhost:7050/chain when creating a fabric network

I followed this tutorial to set up the Fabric environment using Java: https://github.com/hyperledger/fabric/blob/master/docs/Setup/JAVAChaincode.md
I have also successfully set up the environment using the Go language, which took me hours to complete. Now I have decided to implement a Fabric network: https://github.com/hyperledger/fabric/blob/master/docs/Setup/Network-setup.md
I followed all the steps very carefully, and I can deploy, invoke, and query transactions using the CLI. But when I try to perform REST calls for the same purpose, I cannot access localhost:7050 from my browser, although it was working when I was deploying a normal chaincode without a network. Is there any fix, or am I missing something obvious?
You have to bind port 7050 of the container to 0.0.0.0:7050 of your host machine. This can be achieved by providing the -p flag (see the Docker documentation on publishing ports) when running the docker run command that starts the container. So instead of
docker run --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_LOGGING_LEVEL=DEBUG -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger/fabric-peer peer node start
use the following command to start the container:
docker run -p 0.0.0.0:7050:7050 --rm -it -e CORE_VM_ENDPOINT=http://172.17.0.1:2375 -e CORE_LOGGING_LEVEL=DEBUG -e CORE_PEER_ID=vp0 -e CORE_PEER_ADDRESSAUTODETECT=true hyperledger/fabric-peer peer node start
This should fix your problem.
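Once the container is running with the port published, a quick check from the host (assuming the /chain REST endpoint of that Fabric 0.6-era peer is what you were calling):
curl http://localhost:7050/chain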

AWS EB, Play Framework and Docker: Application Already running

I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1 This application is already running (Or
delete /opt/docker/RUNNING_PID file).
And the deployment fails. I cannot get out of this other than by terminating the environment and setting it up again. How can I prevent the environment from getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new PID namespace for each container, which means the first process gets PID 1. PID 1 is a special ID which should be used only by processes designed for it. Could you try using supervisord instead of having the Play application run as the primary process, and see if that resolves your issue? Hopefully supervisord handles Amazon's termination commands better than the Play framework does.
@dkm I was having the same issue with my dockerized Play app. I package my apps as standalone distributions for production using the sbt clean dist command. This produces a .zip file that you can deploy to some folder in your Docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and the contents should be something like this:
Create the script file in the Docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/pointflow/conf/<your app conf. file>
Save the startapp script. Then, from a new terminal, commit your changes to your container's image so they will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale, you can docker stop the container or exit out of its bash shell. Next, start the Play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app.
Hope this helps...