I am creating a Hyperledger Fabric network on a 4-node Docker Swarm cluster.
I want to create 5 peers, 1 org, 4 orderers, and 1 CA, laid out as follows:
node 1: CA, CLI, 2 peers (peer0.org1, peer1.org1)
node 2: 2 orderers, 2 peers (peer2.org1, peer3.org1)
node 3: 2 orderers, 1 peer (peer4.org1)
node 4: 1 peer (peer5.org1)
When I run docker stack deploy -c docker-compose.yaml hlf, 1 peer, 2 orderers, 1 CA and CouchDB come up and run on the manager node, but on the other nodes only the CouchDB containers are running.
When I run docker service logs serviceId I see the error below:
Cannot run peer because error when setting up MSP of type bccsp from directory /etc/hyperledger/crypto/peer/msp: KeyMaterial not found in SigningIdentityInfo
I generated the ca, msp, peers, tlsca and users folders inside org1.example.com on node 1, then copied them to the other nodes under the same directory path.
I can't figure out what is wrong here. The other peer definitions are identical; only the port and peer number change:
  peer4org1examplecom:
    deploy:
      placement:
        constraints:
          - node.labels.name == worker3
    container_name: peer1.org1.example.com
    image: hyperledger/fabric-peer:2.1
    # extends:
    #   file: base.yaml
    #   service: peer-base
    environment:
      - FABRIC_LOGGING_SPEC=info
      - ORDERER_GENERAL_LOGLEVEL=info
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_test
      - CORE_PEER_ID=peer4.org1.example.com
      - CORE_PEER_ADDRESS=peer4.org1.example.com:11051
      - CORE_PEER_LISTENADDRESS=0.0.0.0:11051
      - CORE_PEER_CHAINCODEADDRESS=peer4.org1.example.com:11052
      - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:11052
      # Exposed for discovery Service
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer4.org1.example.com:8051
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb4:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
      - CORE_METRICS_PROVIDER=prometheus
      # - CORE_OPERATIONS_LISTENADDRESS=0.0.0.0:9440
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto/peer/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto/peer/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto/peer/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto/peer/msp
      # extends base.yaml
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=artifacts_default
      - CORE_LOGGING_LEVEL=INFO
      - CORE_PEER_GOSSIP_USELEADERELECTION=true
      - CORE_PEER_GOSSIP_ORGLEADER=false
      - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    ports:
      - 11051:9051
    volumes:
      - /home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer4.org1.example.com/msp:/etc/hyperledger/crypto/peer/msp
      - /home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer4.org1.example.com/tls:/etc/hyperledger/crypto/peer/tls
      - /var/run/:/host/var/run/
      - ./:/etc/hyperledger/
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start
    networks:
      test:
        aliases:
          - peer4.org1.example.com
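For reference, a quick check run on each swarm node (a sketch, using the host path from the volumes above) can confirm the signing material is actually present, since "KeyMaterial not found in SigningIdentityInfo" typically means the private key under msp/keystore is missing or empty in the mounted directory:

# Paths taken from the volume mounts above; adjust the peer name per node.
CRYPTO=/home/ubuntu/hlf-docker-swarm/test-network/artifacts/crypto-config
PEER_DIR=$CRYPTO/peerOrganizations/org1.example.com/peers/peer4.org1.example.com

ls -l "$PEER_DIR/msp/keystore"    # should contain one non-empty *_sk private key file
ls -l "$PEER_DIR/msp/signcerts"   # should contain the peer's certificate
ls -l "$PEER_DIR/tls"             # server.crt, server.key, ca.crt

# If anything is missing, re-copy from node 1 preserving structure and permissions, e.g.
# (the node1 hostname here is hypothetical):
# rsync -a ubuntu@node1:$CRYPTO/ $CRYPTO/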
Related
I have a GCP project where I continuously deploy changes (PRs) made to a GitHub repository to a Cloud Run service using Cloud Build triggers.
I initially set this up through the GCP GUI, which results in a trigger in Cloud Build.
The Cloud Build trigger has a YAML file that looks like this:
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _PLATFORM: managed
  _SERVICE_NAME: bordereau
  _DEPLOY_REGION: europe-west1
  _LABELS: gcb-trigger-id=((a long random id goes here))
  _TRIGGER_ID: ((an other long random id goes here))
  _GCR_HOSTNAME: eu.gcr.io
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - bordereau
Whenever this trigger runs, a new Cloud Run revision is created. I can then manually create a tag URL that points to a specific revision, which lets me access each revision through its own unique URL.
I have tried many ways to edit the Cloud Build YAML file so that each revision automatically gets a unique URL (not manually through the GCP GUI), but I can't seem to find a way. I tried many keywords and read the documentation, but that didn't help either.
Any help is very much appreciated.
It would be great if the revision URL (tag) were something unique and short, like the first characters of the commit SHA or the PR number.
Usually you can do it like this (see the step with id: tag):
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - .
      - '-f'
      - Dockerfile
    id: Build
  - name: gcr.io/cloud-builders/docker
    args:
      - push
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
    id: Push
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - run
      - services
      - update
      - $_SERVICE_NAME
      - '--platform=managed'
      - '--image=$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - >-
        --labels=managed-by=gcp-cloud-build-deploy-cloud-run,commit-sha=$COMMIT_SHA,gcb-build-id=$BUILD_ID,gcb-trigger-id=$_TRIGGER_ID,$_LABELS
      - '--region=$_DEPLOY_REGION'
      - '--quiet'
    id: Deploy
    entrypoint: gcloud
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk:slim'
    args:
      - -c
      - |
        export sha=$COMMIT_SHA
        export CUSTOM_TAG=${sha:0:8}
        export CURRENT_REV=$(gcloud alpha run services describe $_SERVICE_NAME --region=$_DEPLOY_REGION --platform=managed --format='value(status.traffic[0].revisionName)')
        gcloud run services update-traffic $_SERVICE_NAME --set-tags=$$CUSTOM_TAG=$$CURRENT_REV --region=$_DEPLOY_REGION --platform=managed
    id: tag
    entrypoint: bash
images:
  - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _PLATFORM: managed
  _SERVICE_NAME: bordereau
  _DEPLOY_REGION: europe-west1
  _LABELS: gcb-trigger-id=((a long random id goes here))
  _TRIGGER_ID: ((an other long random id goes here))
  _GCR_HOSTNAME: eu.gcr.io
tags:
  - gcp-cloud-build-deploy-cloud-run
  - gcp-cloud-build-deploy-cloud-run-managed
  - bordereau
For that custom tag, I use the first 8 characters of the commit SHA.
Note the slightly odd copy of COMMIT_SHA into a local env var; that's a quirk of how Cloud Build handles substitutions inside a bash step.
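Once the tag step has run, you can confirm the tag was applied and find its URL (a minimal sketch; the service name and region are taken from the substitutions above):

# List the traffic assignments, including any tags and their dedicated URLs.
gcloud run services describe bordereau \
  --region=europe-west1 --platform=managed \
  --format='yaml(status.traffic)'

Each tagged revision gets its own URL, formed by prefixing the tag and "---" to the service's default hostname (e.g. https://<tag>---<service-hostname>).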
I have a CircleCI configuration that runs my tests before merging to master. I start my server for the tests, and it needs to connect to my RDS database, which is protected by security groups. I tried to whitelist the CircleCI IP to allow this, but with no luck.
version: 2.1
orbs:
  aws-white-list-circleci-ip: configure/aws-white-list-circleci-ip@1.0.0
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  aws_setup:
    docker:
      - image: cimg/python:3.11.0
    steps:
      - aws-cli/install
      - aws-white-list-circleci-ip/add
  build:
    docker:
      - image: cimg/node:18.4
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - run:
          name: start the server
          command: npm start
          background: true
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
      - aws-white-list-circleci-ip/remove
workflows:
  build-workflow:
    jobs:
      - aws_setup:
          context: aws_context
      - build:
          requires:
            - aws_setup
          context: aws_context
My context environment variables:
AWS_ACCESS_KEY_ID
AWS_DEFAULT_REGION
AWS_SECRET_ACCESS_KEY
GROUPID
The orb I am using for the IP whitelisting:
https://circleci.com/developer/orbs/orb/configure/aws-white-list-circleci-ip
I figured it out. Here is the working config:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@0.1.13
jobs:
  build:
    docker:
      - image: cimg/python:3.11.0-node
    steps:
      - checkout
      - run: node --version
      - restore_cache:
          name: Restore Npm Package Cache
          keys:
            # Find a cache corresponding to this specific package-lock.json checksum
            # when this file is changed, this key will fail
            - v1-npm-deps-{{ checksum "package-lock.json" }}
            # Find the most recently generated cache used from any branch
            - v1-npm-deps-
      - run: npm install
      - aws-cli/install
      - run:
          command: |
            public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
            echo "this computers public ip address is $public_ip_address"
            aws ec2 authorize-security-group-ingress --region $AWS_DEFAULT_REGION --group-id $GROUPID --ip-permissions "[{\"IpProtocol\": \"tcp\", \"FromPort\": 22, \"ToPort\": 7000, \"IpRanges\": [{\"CidrIp\": \"${public_ip_address}/32\",\"Description\":\"CircleCi\"}]}]"
      - save_cache:
          name: Save Npm Package Cache
          key: v1-npm-deps-{{ checksum "package-lock.json" }}
          paths:
            - ./node_modules
      - run:
          name: run tests
          command: npm run test
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  build-workflow:
    jobs:
      - build:
          context: aws_context
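One caveat with this approach: the ingress rule added above stays in the security group after the job finishes, so the group accumulates one CircleCI IP per build. A cleanup at the end of the job seems worth adding; a minimal sketch of the revoke command (assuming the same $GROUPID, region and port range as above, run as a final step, e.g. with when: always so it also runs when tests fail):

# Remove the rule that was added for this build's public IP.
public_ip_address=$(wget -qO- http://checkip.amazonaws.com)
aws ec2 revoke-security-group-ingress \
  --region "$AWS_DEFAULT_REGION" \
  --group-id "$GROUPID" \
  --protocol tcp \
  --port 22-7000 \
  --cidr "${public_ip_address}/32"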
I've been trying to debug my Kibana/Elasticsearch service for a couple of days now.
I want to be able to access the Kibana UI running in my Docker container from a web browser on a separate host, but the service is not reachable.
It's only for exploring and testing, so I don't need any authentication on it for now; I've locked the security group down to trusted IP addresses.
Both the Kibana and Elasticsearch containers are running, and I can access Kibana via localhost:5601.
After trawling through loads of posts and documentation, I believe the issue is with binding the container to an IP address so that it's accessible to external hosts.
my kibana config file:
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"
my docker compose file:
version: '3.6'
services:
  # creates a fluentd service with mountpoints
  fluentd:
    container_name: fluentd
    user: root
    build:
      context: .
    image: fluentd
    ports:
      - "9880:9880"
    volumes:
      # LOCAL_HOST_DIR:CONTAINER_DIR
      - /var/lib/docker/containers:/fluentd/log/containers # Example: Reading docker logs on
      - ./file:/fluentd/log/files/ # Example: Reading logs from a file
      - ./configurations:/fluentd/etc/ # where the default config file is located
      # - ./logs:/output/ # Example: Fluentd will collect logs and store them here for demo
    logging:
      driver: "local"
  # This app sends logs to the Fluentd endpoint via HTTP
  http-myapp:
    container_name: http-myapp
    image: alpine
    volumes:
      - ./http:/app
    command: [ /bin/sh, -c, "apk add --no-cache curl && chmod +x /app/http_app.sh && ./app/http_app.sh" ]
  # write test files to a local volume
  file-myapp:
    image: alpine
    container_name: log-generator
    # restart: always
    volumes:
      - ./file:/app
    command: [ /bin/sh, -c, "chmod +x /app/app.sh && ./app/app.sh" ]
  elasticsearch: # port 9200
    image: docker.elastic.co/elasticsearch/elasticsearch:7.13.1
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      # - cluster.initial_master_nodes=elasticsearch
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xmx256m -Xms256m"
      - discovery.type=single-node
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ulimits:
      memlock:
        soft: -1
        hard: -1
  kibana:
    image: docker.elastic.co/kibana/kibana:7.13.1
    container_name: kibana
    volumes:
      - ./kibana.yml:/usr/share/kibana/config/kibana.yml
    ports:
      - "5601:5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
      SERVER_NAME: kibana
      # SERVER.HOST: "0.0.0.0"
volumes:
  esdata:
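A couple of quick checks that can narrow this down (a sketch; <docker-host-ip> is a placeholder for the Docker host's address, and it assumes the compose file above is running on that host):

# On the Docker host: confirm Kibana's port is published on all interfaces (0.0.0.0:5601).
docker port kibana
ss -tlnp | grep 5601

# Still on the host: Kibana's status endpoint should answer locally once it has started.
curl -sS http://localhost:5601/api/status | head -c 200

# From the separate host: if this times out while the checks above pass, the problem is
# network-level (security group, firewall) rather than the Kibana bind address.
curl -sv http://<docker-host-ip>:5601/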
Hi, I am new to these tools. I have installed localstack/localstack-full and AWS CLI v2. I am trying to add a secret, but it throws an error like the one below:
D:\aptsmt\docker\localstack>aws --endpoint-url http://localhost:4572 --region eu-west-1 secretsmanager create-secret --name dummy-secrets --secret-string file://secrets.json
Connection was closed before we received a valid response from endpoint URL: "http://localhost:4572/".
I used version localstack/localstack:0.10.1.2 and it works for me:
version: "3.7"
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME-localstack}"
    image: localstack/localstack:0.10.1.2
    hostname: localstack
    networks:
      - localstack-net
    ports:
      - "4566-4599:4566-4599"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=sns,dynamodb,s3,sqs,lambda,cloudformation,sts,iam,cloudwatch,apigateway,events
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
      - PORT_WEB_UI=8080
      - LAMBDA_EXECUTOR=docker-reuse
      - LAMBDA_REMOTE_DOCKER=false
      - LAMBDA_REMOVE_CONTAINERS=true
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
      - HOST_TMP_FOLDER=${TMPDIR}
    volumes:
      - ./data:/tmp/localstack
      - ./bin:/docker-entrypoint-initaws.d
networks:
  localstack-net:
    external: false
    driver: bridge
    name: localstack-net
Just rerun and add --overwrite. Sometimes LocalStack is slow, especially if other things are running too.
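Worth noting as well: newer LocalStack images expose every service on the single edge port 4566, and the old per-service ports such as 4572 may not answer at all, which can produce exactly this kind of connection error. A minimal sketch of the same call against the edge port:

# Same create-secret call, pointed at LocalStack's edge port (4566) instead of 4572.
# secretsmanager must be enabled in SERVICES (or SERVICES left unset to start everything).
aws --endpoint-url http://localhost:4566 --region eu-west-1 \
  secretsmanager create-secret \
  --name dummy-secrets \
  --secret-string file://secrets.json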
I realize this question comes up a lot. I've read many threads and blog posts, but here I am. Long story short, I have Docker containers running wurstmeister/zookeeper and wurstmeister/kafka, plus some services running in their own containers; I'll just mention the NodeJS one for now. Everything works fine at home using IP addresses (not localhost), so I'm baffled as to what the difference is here. On AWS it simply "doesn't work", even though it seems to at least connect to the broker at the start. I'm explicitly using internal IPs in the config because I don't want this exposed externally.
Reading around, I've tried two setups. One works at home (KAFKA_ADVERTISED_HOST_NAME); one doesn't (KAFKA_ADVERTISED_LISTENERS). Neither works on my EC2 Linux box.
Kafka docker-compose.yml
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: <internal-ip>
      KAFKA_ADVERTISED_PORT: "9092"
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
NodeJS docker-compose.yml
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile
    networks:
      - kafka_my-network
    restart: unless-stopped
    ports:
      - "1337:3000"
    volumes:
      - "/tmp:/tmp"
      - "/var/log:/var/log"
networks:
  kafka_my-network:
    external: true
Then in NodeJS
const kafka = require('kafka-node'); // assuming kafka-node, which matches the KafkaClient/Producer API used here

const kafkaHost = '<internal-ip>:9092';
const client = new kafka.KafkaClient({ kafkaHost });
const producer = new kafka.Producer(client);
const kafkaTopic = 'test';
let ready = false;

producer.on('ready', function() {
  console.log(`kafka producer is ready`); // I see this, so I'm assuming all is well
  ready = true;
});

producer.on('error', function(err) {
  console.error(err);
});

const payload = [
  {
    topic: kafkaTopic,
    messages: JSON.stringify(myMessages) // myMessages is defined elsewhere in my app
  }
];

producer.send(payload, function(err, data) {
  if (err) {
    console.error(`Send error ${JSON.stringify(err)}`);
  }
  console.log(`Sent data ${JSON.stringify(data)}`);
});
When I start my NodeJS server, I see that it connects to a Kafka broker, and I can confirm that port 9092 is open by checking with telnet and/or nc. Then, when it sends a message, the callback gets an empty error.
I realize KAFKA_ADVERTISED_HOST_NAME is deprecated, so for completeness here is my attempt using KAFKA_ADVERTISED_LISTENERS, which also failed. With this configuration I seemed to get the same (failing) results at home as on EC2.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      - my-network
  kafka:
    image: wurstmeister/kafka
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://<internal-ip>:9092
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - my-network
networks:
  my-network:
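One diagnostic that helps in either setup (a sketch, assuming kafkacat/kcat is available on the EC2 host; <internal-ip> is the same placeholder as in the compose file): ask the broker for its metadata and check which host:port it advertises, because that advertised address, not the bootstrap address, is what the client uses for the actual produce requests.

# List cluster metadata; the "broker ... at host:port" line shows the advertised listener
# that clients are told to reconnect to after bootstrapping.
kafkacat -b <internal-ip>:9092 -L

# If that advertised host is not reachable from inside the NodeJS container, sends fail
# even though the initial connection (and telnet/nc to port 9092) succeeds.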
EDIT: I won't present this as a solution, but the bitnami image with the following config works. The main difference is that it has a pretty straightforward README, which I followed. I can't be certain whether I tried the exact equivalent config with wurstmeister (I tried many variations; at least one of them ran fine in Docker containers on my own machine but not on a single EC2 instance).
Note that for this I mapped 'kafka' to an internal IP (not loopback) in /etc/hosts, which should be tantamount to using the internal IP explicitly as I did above.
version: '2'
services:
  zookeeper:
    image: 'bitnami/zookeeper:3'
    ports:
      - '2181:2181'
    volumes:
      - 'zookeeper_data:/bitnami'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
    networks:
      - my-network
  kafka:
    image: 'bitnami/kafka:2'
    ports:
      - '9092:9092'
      - '29092:29092'
    volumes:
      - 'kafka_data:/bitnami'
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,PLAINTEXT_HOST://:29092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    depends_on:
      - zookeeper
    networks:
      - my-network
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
networks:
  my-network:
    driver: bridge
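With the advertised listeners above, clients inside the my-network Docker network connect via kafka:9092, while clients on the EC2 host itself use localhost:29092. A quick smoke test from inside the broker container (a sketch; the container name bitnami_kafka_1 is an assumption, and it relies on the Kafka CLI scripts the bitnami image puts on the PATH):

# Produce one message over the internal listener, then read it back.
docker exec -it bitnami_kafka_1 bash -c \
  'echo hello | kafka-console-producer.sh --broker-list kafka:9092 --topic test'
docker exec -it bitnami_kafka_1 \
  kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic test --from-beginning --max-messages 1

# From the EC2 host itself (outside Docker), point clients at localhost:29092 instead.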