Hyperledger Fabric Node SDK deploy.js example failing

I'm following these instructions for setting up Hyperledger Fabric:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I run deploy.js, I get this output:
info: Returning a new winston logger with default configurations
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric CA service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a CryptoKeyStore to save keys, using the store: {"opts":{"path":"/home/ubuntu/.hfc-key-store"}}
I'm able to use the Docker CLI, but not the Node SDK, which fails with:
Failed to load user "admin" from local key value store
How do I store the admin user?

Fixed after installing CouchDB:
docker pull couchdb
docker run -d -p 5984:5984 --name my-couchdb couchdb
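To verify that CouchDB is actually reachable before re-running deploy.js (assuming the default port mapping above), hit its root endpoint:
curl http://localhost:5984/
It should answer with a JSON welcome document, e.g. {"couchdb":"Welcome",...}.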

The certificate authority and chaincode services in the docker-compose YAML file have a volumes section, e.g.:
ccenv_latest:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ccenv_snapshot:
  volumes:
    - ./ccenv:/opt/gopath/src/github.com/hyperledger/fabric/orderer/ccenv
ca:
  volumes:
    - ./tmp/ca:/.fabric-ca
You need to make sure the local paths are valid: in the above configuration, you need ./ccenv and ./tmp/ca directories at the same level as the docker-compose YAML file, which you can create up front as shown below.
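For example, from the directory containing the docker-compose YAML file:
mkdir -p ./ccenv ./tmp/ca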

Google Cloud: ERROR: Reachability Check failed

I followed this answer already, but it didn't help. I also re-installed the gcloud CLI, but now I am not able to initialize the CLI anymore because of the following error.
Here is my output for ./google-cloud-sdk/bin/gcloud init
ERROR: Reachability Check failed.
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with httplib2 (SSLCertVerificationError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with httplib2 (SSLCertVerificationError)
Cannot reach https://cloudresourcemanager.googleapis.com/v1beta1/projects with requests (SSLError)
Cannot reach https://www.googleapis.com/auth/cloud-platform with requests (SSLError)
Network connection problems may be due to proxy or firewall settings.
Also, I am not behind any corporate proxy.
It was working perfectly a few days ago, until today. I did not change any settings or install any new services.
Output for ./google-cloud-sdk/bin/gcloud info.
./google-cloud-sdk/bin/gcloud info
Google Cloud SDK [354.0.0]
Python Version: [3.7.9 (v3.7.9:13c94747c7, Aug 15 2020, 01:31:08) [Clang 6.0 (clang-600.0.57)]]
Python Location: [/Users/myname/.config/gcloud/virtenv/bin/python3]
Site Packages: [Enabled]
Installation Root: [/Users/myname/Downloads/google-cloud-sdk]
Installed Components:
gsutil: [4.67]
core: [2021.08.20]
bq: [2.0.71]
System PATH: [/Users/myname/.config/gcloud/virtenv/bin:/Users/myname/Downloads/apache-maven-3.8.4/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin:/usr/local/munki:/usr/local/opt/go/libexec/bin:/Users/myname/go/bin]
Python PATH: [/Users/myname/Downloads/./google-cloud-sdk/lib/third_party:/Users/myname/Downloads/google-cloud-sdk/lib:/Library/Frameworks/Python.framework/Versions/3.7/lib/python37.zip:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7:/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/lib-dynload:/Users/myname/.config/gcloud/virtenv/lib/python3.7/site-packages]
Cloud SDK on PATH: [False]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/myname/Downloads/google-cloud-sdk/properties]
User Config Directory: [/Users/myname/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/myname/.config/gcloud/configurations/config_default]
Account: [None]
Project: [None]
Current Properties:
[core]
disable_usage_reporting: [True]
Logs Directory: [/Users/myname/.config/gcloud/logs]
Last Log File: [/Users/myname/.config/gcloud/logs/2022.08.10/15.35.06.807614.log]
git: [git version 2.32.0 (Apple Git-132)]
ssh: [OpenSSH_8.1p1, LibreSSL 2.7.3]
Update on this: just disable SSL validation and everything will work.
gcloud config set auth/disable_ssl_validation True
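Note, though, that disabling SSL validation is a workaround rather than a fix. If the verification errors come from a proxy or security product that re-signs TLS traffic, a safer option (assuming you can export its CA certificate as a PEM file) is to point the SDK at that certificate instead:
gcloud config set core/custom_ca_certs_file /path/to/ca.pem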

Django's ./manage.py always causes a skaffold rebuild: is there a way to prevent this?

I develop in a local k8s cluster with minikube and Skaffold, using Django and DRF for the API.
I'm working on a number of models.py files, and one thing that is starting to get annoying is that any time I run a ./manage.py command (showmigrations, makemigrations, etc.), it triggers a Skaffold rebuild of the API nodes. It takes less than 10 seconds, but it's annoying nonetheless.
What should I exclude/include specifically from my skaffold.yaml to prevent this?
apiVersion: skaffold/v2beta12
kind: Config
build:
  artifacts:
    - image: postgres
      context: postgres
      sync:
        manual:
          - src: "**/*.sql"
            dest: .
      docker:
        dockerfile: Dockerfile.dev
    - image: api
      context: api
      sync:
        manual:
          - src: "**/*.py"
            dest: .
      docker:
        dockerfile: Dockerfile.dev
  local:
    push: false
deploy:
  kubectl:
    manifests:
      - k8s/ingress/development.yaml
      - k8s/postgres/development.yaml
      - k8s/api/development.yaml
    defaultNamespace: development
It seems that ./manage.py must be recording some state locally, and thus triggering a rebuild. You need to add these state files to your .dockerignore.
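A minimal .dockerignore sketch for the api context, assuming the usual suspects - bytecode caches and a local SQLite database - are the state being written (adjust to whatever the verbose log below reports):
__pycache__/
*.pyc
db.sqlite3
Skaffold's file watcher respects the build context's .dockerignore for Docker artifacts, so files matched there should stop triggering rebuilds.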
Skaffold normally logs at a warning level, which suppresses details of what triggers sync or rebuilds. Run Skaffold with -v info and you'll see more detail:
$ skaffold dev -v info
...
[node] Example app listening on port 3000!
INFO[0336] files added: [backend/src/foo]
INFO[0336] Changed file src/foo does not match any sync pattern. Skipping sync
Generating tags...
- node-example -> node-example:v1.20.0-8-gc9335b0ad-dirty
INFO[0336] Tags generated in 80.293621ms
Checking cache...
- node-example: Not found. Building
INFO[0336] Cache check completed in 1.844615ms
Found [minikube] context, using local docker daemon.
Building [node-example]...

How to fix "unable to prepare context: unable to evaluate symlinks in Dockerfile path" error in CircleCI

I'm setting up CircleCI to automatically build and deploy to AWS ECR & ECS.
But the build fails because no Dockerfile is found.
Maybe this is because I use docker-compose for multiple Docker images.
But I don't know how to resolve this issue.
Is there a way to use a Dockerfile instead of docker-compose?
front: React
backend: Golang
ci-tool: circle-ci
db: mysql
article
 ├ .circleci
 ├ client
 ├ api
 └ docker-compose.yml
Here is my .circleci/config.yml:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.0.0
  aws-ecs: circleci/aws-ecs@0.0.8
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          region: AWS_REGION
          account-url: AWS_ECR_ACCOUNT_URL
          repo: 'article-ecr-jpskgc'
          tag: '${CIRCLE_SHA1}'
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'article-task-jpskgc'
          cluster-name: 'article-cluster-jpskgc'
          service-name: 'article-service-jpskgc'
          container-image-name-updates: 'container=article-container-jpskgc,tag=${CIRCLE_SHA1}'
Here is the source code on GitHub:
https://github.com/jpskgc/article
I expect the build/deploy via CircleCI to ECR/ECS to succeed, but it actually fails.
This is the error log on circle-ci.
Build docker image
Exit code: 1
#!/bin/bash -eo pipefail
docker build \
\
-f Dockerfile \
-t $AWS_ECR_ACCOUNT_URL/article-ecr-jpskgc:${CIRCLE_SHA1} \
.
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/circleci/project/Dockerfile: no such file or directory
Exited with code 1
You must use a Dockerfile; check out the documentation for the orb you are using and read through it here. Also, docker-compose ≠ docker, so I will confirm that one cannot be used as a substitute for the other.
Given your docker-compose.yml, I have a few suggestions for your general setup and CI.
For reference here is the docker-compose.yml in question:
version: '3'
services:
  db:
    image: mysql
    ports:
      - '3306:3306'
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: article
      MYSQL_USER: docker
      MYSQL_PASSWORD: docker
  nginx:
    restart: always
    build:
      dockerfile: Dockerfile.dev
      context: ./nginx
    ports:
      - '3050:80'
  api:
    build:
      dockerfile: Dockerfile.dev
      context: ./api
    volumes:
      - ./api:/app
    ports:
      - 2345:2345
    depends_on:
      - db
    tty: true
    environment:
      - AWS_ACCESS_KEY_ID
      - AWS_SECRET_ACCESS_KEY
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - 3000:3000
From the above we have the various components, just as you have stated:
MySQL Database
Nginx Loadbalancer
Client App
API Server
Here are my recommendations for each component:
MySQL Database
Since you are deploying to AWS I recommend deploying a MySQL instance on the free tier, please follow this documentation: https://aws.amazon.com/rds/free. With this you can remove your database from CI, which is recommended as ECS is not the ideal service to run a MySQL server.
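If you prefer the CLI to the console, creating a free-tier instance can be sketched roughly like this (identifier and credentials are placeholders):
aws rds create-db-instance --db-instance-identifier article-db --db-instance-class db.t2.micro --engine mysql --allocated-storage 20 --master-username admin --master-user-password <password>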
Nginx Loadbalancer
Because you are using ECS, this is redundant: AWS handles all the load balancing for you, so a separate Nginx load balancer is not required.
Client App
Because this is a React application, you shouldn't deploy it to ECS -- that is not cost effective. You would rather deploy it to Amazon S3. There are many resources on how to do this. You may follow this guide, though you may have to make a few changes based on the structure of your repository.
This will reduce your overall cost and it makes more sense than an entire Docker container running just to serve static files.
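For instance, once the production bundle is built, the upload is a one-liner (bucket name hypothetical):
npm run build
aws s3 sync build/ s3://article-client-bucket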
API Server
This is the only thing that should be running in ECS, and all you need to do is point to the correct Dockerfile in your configuration for it to be built and pushed successfully.
You may therefore edit your CircleCI config as follows, assuming we are using the same Dockerfile from your docker-compose.yml:
build_and_push_image:
  jobs:
    - aws-ecr/build-and-push-image:
        region: AWS_REGION
        dockerfile: Dockerfile.dev
        path: ./api
        account-url: AWS_ECR_ACCOUNT_URL
        repo: 'article-ecr-jpskgc'
        tag: '${CIRCLE_SHA1}'
Things to Note
My answer does not include:
How to load balance your API service please follow these docs on how to do so: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
Details on setting up the MySQL server; it is assumed you will follow the AWS documentation provided above.
Things you must do:
Point your client app to the API server; this will probably require a code change from what I've seen.
I want to stress yet again that you must load balance your API server according to the docs linked above.
You do not need to edit your docker-compose.yml.

Metabase on Google App Engine

I'm trying to set up Metabase on Google App Engine using Google Cloud SQL (MySQL).
I've got it running using this Git repository and this app.yaml:
runtime: custom
env: flex
# Metabase does not support horizontal scaling
# https://github.com/metabase/metabase/issues/2754
# https://cloud.google.com/appengine/docs/flexible/java/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
env_variables:
  # MB_JETTY_PORT: 8080
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: [db_name]
  # MB_DB_PORT: 5432
  MB_DB_USER: [db_user]
  MB_DB_PASS: [db_password]
  # MB_DB_HOST: 127.0.0.1
  CLOUD_SQL_INSTANCE: [project-id]:[location]:[instance-id]
I have 2 issues:
Metabase fails to connect to Cloud SQL - the Cloud SQL instance is part of the same project, and App Engine is authorized.
After I create my admin user in Metabase, I am only able to log in for a few seconds (and only sometimes), but it keeps throwing me to either /setup or /auth/login, saying the password doesn't match (when it does).
I hope someone can help - thank you!
So, we just got Metabase running in Google App Engine with a Cloud SQL instance running PostgreSQL; these are the steps we went through.
First, create a Dockerfile:
FROM gcr.io/google-appengine/openjdk:8
EXPOSE 8080
ENV JAVA_OPTS "-XX:+IgnoreUnrecognizedVMOptions -Dfile.encoding=UTF-8 --add-opens=java.base/java.net=ALL-UNNAMED --add-modules=java.xml.bind"
ENV JAVA_TOOL_OPTIONS "-Xmx1g"
ADD https://downloads.metabase.com/enterprise/v1.1.6/metabase.jar $APP_DESTINATION
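As a quick local smoke test (assuming Docker is installed; the openjdk runtime image's entrypoint should launch the jar added at $APP_DESTINATION, and with no MB_DB_* variables set Metabase falls back to its embedded H2 database):
docker build -t metabase-flex .
docker run -p 8080:8080 metabase-flex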
We tried pushing the memory further down, but 1 GB seemed to be the sweet spot. On to the app.yaml:
runtime: custom
env: flex
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 1
  disk_size_gb: 10
readiness_check:
  path: "/api/health"
  check_interval_sec: 5
  timeout_sec: 5
  failure_threshold: 2
  success_threshold: 2
  app_start_timeout_sec: 600
beta_settings:
  cloud_sql_instances: <Instance-Connection-Name>=tcp:5432
env_variables:
  MB_DB_DBNAME: 'metabase'
  MB_DB_TYPE: 'postgres'
  MB_DB_HOST: '172.17.0.1'
  MB_DB_PORT: '5432'
  MB_DB_USER: '<username>'
  MB_DB_PASS: '<password>'
  MB_JETTY_PORT: '8080'
Note the beta_settings field at the bottom, which handles what akilesh raj was doing manually. Also, the trailing =tcp:5432 is required, since Metabase does not support Unix sockets yet.
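For a hypothetical project and instance, the full line would read:
cloud_sql_instances: my-project:europe-west1:metabase-pg=tcp:5432
The connection name is the <project>:<region>:<instance> string shown on the instance's overview page in the Cloud Console.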
Relevant documentation can be found here.
Although I am not sure of the reason, I think authorizing the service account of App Engine is not enough for accessing Cloud SQL.
In order to authorize your app to access your Cloud SQL instance, you can use either of these two methods:
Within the app.yaml file, configure an environment variable pointing to a service account key file with the correct authorization for Cloud SQL:
env_variables:
  GOOGLE_APPLICATION_CREDENTIALS: [YOURKEYFILE].json
Have your code fetch an authorized service account key from a bucket, and load it afterwards with the help of the Cloud Storage client library. Since your runtime is custom, the pseudocode to translate into the language you use is the following:
.....
It is better to use the Cloud SQL Proxy to connect to the SQL instances. This way you do not have to authorize the instances in Cloud SQL every time there is a new instance.
More on CloudProxy here
As for setting up Metabase in the Google App Engine, I am including the app.yaml and Dockerfile below.
The app.yaml file,
runtime: custom
env: flex
manual_scaling:
  instances: 1
env_variables:
  MB_DB_TYPE: mysql
  MB_DB_DBNAME: metabase
  MB_DB_PORT: 3306
  MB_DB_USER: root
  MB_DB_PASS: password
  MB_DB_HOST: 127.0.0.1
  METABASE_SQL_INSTANCE: instance_name
The Dockerfile,
FROM gcr.io/google-appengine/openjdk:8
# Set locale to UTF-8
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
# Install CloudProxy
ADD https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 ./cloud_sql_proxy
RUN chmod +x ./cloud_sql_proxy
# Download the latest version of Metabase
ADD http://downloads.metabase.com/v0.21.1/metabase.jar ./metabase.jar
CMD nohup ./cloud_sql_proxy -instances=$METABASE_SQL_INSTANCE=tcp:$MB_DB_PORT & java -jar /startup/metabase.jar
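At deploy time the CMD above picks METABASE_SQL_INSTANCE and MB_DB_PORT up from app.yaml, so with the hypothetical values
METABASE_SQL_INSTANCE: my-project:us-central1:metabase-mysql
MB_DB_PORT: 3306
the container effectively runs ./cloud_sql_proxy -instances=my-project:us-central1:metabase-mysql=tcp:3306 before starting Metabase.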

Docker for AWS and Selenium Grid - Connection refused / No route to host (Host unreachable)

What I am trying to achieve is a scalable, on-demand test infrastructure using Selenium Grid.
I can get everything up and running, but what I end up with are the connection refused / no route to host (host unreachable) errors from the title.
Here are all the pieces:
Docker for AWS (CloudFormation Stack)
docker-selenium
Docker compose file (below)
The "implied" software used are:
Docker swarm
Stacks
Here is what I can accomplish:
Create, log into, and ping all hosts & nodes within the stack, following the guidelines here: deploy Docker for AWS
Deploy using the compose file at the end of this inquiry by running:
docker stack deploy -c docker-compose.yml grid
View Selenium Grid console using the public facing DNS name automatically provided by AWS (upon successful creation of the stack). Here is a helpful entry on the subject: Docker Swarm Mode.
Here are the contents of the compose file I am using:
version: '3'
services:
  hub:
    image: selenium/hub:3.4.0-chromium
    ports:
      - 4444:4444
    networks:
      - selenium
    environment:
      - JAVA_OPTS=-Xmx1024m
    deploy:
      update_config:
        parallelism: 1
        delay: 10s
      placement:
        constraints: [node.role == manager]
  chrome:
    image: selenium/node-chrome:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
  firefox:
    image: selenium/node-firefox:3.4.0-chromium
    networks:
      - selenium
    depends_on:
      - hub
    environment:
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
    deploy:
      placement:
        constraints: [node.role == worker]
networks:
  selenium:
Any guidance on this issue will be greatly appreciated. Thank you.
I have also tried opening up ports across the swarm:
swarm-exec docker service update --publish-add 5555:5555 grid
A quick Google brought up https://github.com/SeleniumHQ/docker-selenium/issues/255. You need to add the following to the Chrome and Firefox nodes:
entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
This is because the containers have two IP addresses in Swarm Mode and the nodes are picking up the wrong address and advertising that to the hub. This change will have the nodes advertise their hostname so the hub can find the nodes by DNS instead.
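Applied to the compose file from the question, the chrome service would then look like this (the firefox service gets the same entrypoint line):
chrome:
  image: selenium/node-chrome:3.4.0-chromium
  entrypoint: bash -c 'SE_OPTS="-host $$HOSTNAME" /opt/bin/entry_point.sh'
  networks:
    - selenium
  depends_on:
    - hub
  environment:
    - HUB_PORT_4444_TCP_ADDR=hub
    - HUB_PORT_4444_TCP_PORT=4444
  deploy:
    placement:
      constraints: [node.role == worker]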