I am trying to set up Hyperledger Fabric on multiple hosts using different AWS EC2 instances. I am having a problem setting up the peer on the second instance.
I have been following the guide in this Medium article:
https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
I got as far as step 6 under "Setting Up the Network", which sets up a peer on the second instance.
I used this command:
docker run --rm -it --network="my-net" \
  --link orderer.example.com:orderer.example.com \
  --link peer0.org1.example.com:peer0.org1.example.com \
  --name peer1.org1.example.com \
  -p 9051:7051 -p 9053:7053 \
  -e CORE_LEDGER_STATE_STATEDATABASE=CouchDB \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb1:5984 \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME= \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD= \
  -e CORE_PEER_ADDRESSAUTODETECT=true \
  -e CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock \
  -e FABRIC_LOGGING_SPEC=DEBUG \
  -e CORE_PEER_NETWORKID=peer1.org1.example.com \
  -e CORE_NEXT=true \
  -e CORE_PEER_ENDORSER_ENABLED=true \
  -e CORE_PEER_ID=peer1.org1.example.com \
  -e CORE_PEER_PROFILE_ENABLED=true \
  -e CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer.example.com:7050 \
  -e CORE_PEER_GOSSIP_ORGLEADER=true \
  -e CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1.org1.example.com:7051 \
  -e CORE_PEER_GOSSIP_IGNORESECURITY=true \
  -e CORE_PEER_LOCALMSPID=Org1MSP \
  -e CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=my-net \
  -e CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051 \
  -e CORE_PEER_GOSSIP_USELEADERELECTION=false \
  -e CORE_PEER_TLS_ENABLED=false \
  -v /var/run/:/host/var/run/ \
  -v $(pwd)/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp \
  -w /opt/gopath/src/github.com/hyperledger/fabric/peer \
  hyperledger/fabric-peer peer node start
And it resulted in this error:
2019-04-24 20:06:29.798 UTC [msp] getSigningIdentityFromConf -> DEBU 036 Could not find SKI [8d25ff0a9c02de411acf743e7a6577fac0573d0d2561f988fb2305be74918de7], trying KeyMaterial field: Key with SKI 8d25ff0a9c02de411acf743e7a6577fac0573d0d2561f988fb2305be74918de7 not found in /etc/hyperledger/fabric/msp/keystore
Failed getting key for SKI [[141 37 255 10 156 2 222 65 26 207 116 62 122 101 119 250 192 87 61 13 37 97 249 136 251 35 5 190 116 145 141 231]]
github.com/hyperledger/fabric/bccsp/sw.(*CSP).GetKey
/opt/gopath/src/github.com/hyperledger/fabric/bccsp/sw/impl.go:170
github.com/hyperledger/fabric/msp.(*bccspmsp).getSigningIdentityFromConf
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:181
github.com/hyperledger/fabric/msp.(*bccspmsp).setupSigningIdentity
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:267
github.com/hyperledger/fabric/msp.(*bccspmsp).preSetupV1
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:413
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:373
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1-fm
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:112
github.com/hyperledger/fabric/msp.(*bccspmsp).Setup
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:225
github.com/hyperledger/fabric/msp/cache.(*cachedMSP).Setup
/opt/gopath/src/github.com/hyperledger/fabric/msp/cache/cache.go:88
github.com/hyperledger/fabric/msp/mgmt.LoadLocalMspWithType
/opt/gopath/src/github.com/hyperledger/fabric/msp/mgmt/mgmt.go:32
github.com/hyperledger/fabric/peer/common.InitCrypto
/opt/gopath/src/github.com/hyperledger/fabric/peer/common/common.go:143
github.com/hyperledger/fabric/peer/common.InitCmd
/opt/gopath/src/github.com/hyperledger/fabric/peer/common/common.go:309
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).execute
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:746
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:852
github.com/hyperledger/fabric/vendor/github.com/spf13/cobra.(*Command).Execute
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/spf13/cobra/command.go:800
main.main
/opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:53
runtime.main
/opt/go/src/runtime/proc.go:201
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:1333
2019-04-24 20:06:29.798 UTC [main] InitCmd -> ERRO 037 Cannot run peer because error when setting up MSP of type bccsp from directory /etc/hyperledger/fabric/msp: KeyMaterial not found in SigningIdentityInfo
I have tried searching for a solution but wasn't able to find anything. I have little experience with Fabric. Any ideas?
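The log says the peer cannot find the private key with that SKI in /etc/hyperledger/fabric/msp/keystore, which is the directory mounted with -v in the command above. A quick sanity check on the second instance (a sketch, assuming the crypto-config tree was generated by cryptogen on the first host and copied to this one):
# Check that the mounted MSP directory really contains peer1's key material.
MSP_DIR="$(pwd)/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp"
ls -l "$MSP_DIR/keystore"   # expect exactly one *_sk private key file
ls -l "$MSP_DIR/signcerts"  # expect peer1's signing certificate
# If the keystore is empty, or the files differ from those on the first instance,
# re-copy the crypto-config directory from the host where it was generated and rerun the container.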
I have a Jupyter notebook on Dataproc and I need a jar to run some job. I'm aware of editing spark-defaults.conf and of using --jars=gs://spark-lib/bigquery/spark-bigquery-latest.jar to submit the job from the command line - both work well. However, if I want to add the jar directly from the Jupyter notebook, I tried the methods below and they both fail.
Method 1:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars gs://spark-lib/bigquery/spark-bigquery-latest.jar pyspark-shell'
Method 2:
spark = SparkSession.builder.appName('Shakespeare WordCount')\
    .config('spark.jars', 'gs://spark-lib/bigquery/spark-bigquery-latest.jar')\
    .getOrCreate()
They both have the same error:
---------------------------------------------------------------------------
Py4JJavaError Traceback (most recent call last)
<ipython-input-1-2b7692efb32b> in <module>()
19 # Read BQ data into spark dataframe
20 # This method reads from BQ directly, does not use GCS for intermediate results
---> 21 df = spark.read.format('bigquery').option('table', table).load()
22
23 df.show(5)
/usr/lib/spark/python/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
170 return self._df(self._jreader.load(self._spark._sc._jvm.PythonUtils.toSeq(path)))
171 else:
--> 172 return self._df(self._jreader.load())
173
174 #since(1.4)
/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
1255 answer = self.gateway_client.send_command(command)
1256 return_value = get_return_value(
-> 1257 answer, self.gateway_client, self.target_id, self.name)
1258
1259 for temp_arg in temp_args:
/usr/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
61 def deco(*a, **kw):
62 try:
---> 63 return f(*a, **kw)
64 except py4j.protocol.Py4JJavaError as e:
65 s = e.java_exception.toString()
/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
326 raise Py4JJavaError(
327 "An error occurred while calling {0}{1}{2}.\n".
--> 328 format(target_id, ".", name), value)
329 else:
330 raise Py4JError(
Py4JJavaError: An error occurred while calling o81.load.
: java.lang.ClassNotFoundException: Failed to find data source: bigquery. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: bigquery.DefaultSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20$$anonfun$apply$12.apply(DataSource.scala:634)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$20.apply(DataSource.scala:634)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:634)
... 13 more
The task I try to run is very simple:
table = 'publicdata.samples.shakespeare'
df = spark.read.format('bigquery').option('table', table).load()
df.show(5)
I understand there are many similar questions and answers, but they either don't work or don't fit my needs. There are ad-hoc jars I'll need, and I don't want to keep all of them in the default configuration. I'd like to be more flexible and add jars on the go. How can I solve this? Thank you!
Unfortunately there isn't a built-in way to do this dynamically without effectively just editing spark-defaults.conf and restarting the kernel. There's an open feature request in Spark for this.
Zeppelin has some usability features for adding jars through the UI, but even in Zeppelin you have to restart the interpreter afterwards for the Spark context to pick them up in its classloader, and those options also require the jarfiles to already be staged on the local filesystem; you can't just refer to remote file paths or URLs.
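For reference, the baseline mentioned above (editing spark-defaults.conf and restarting the kernel) looks roughly like this; the /etc/spark/conf path is an assumption based on the stock Dataproc layout:
# Run on the cluster, e.g. over SSH; the config path is assumed to be the default Dataproc location.
echo "spark.jars gs://spark-lib/bigquery/spark-bigquery-latest.jar" | \
  sudo tee -a /etc/spark/conf/spark-defaults.conf
# Then restart the Jupyter kernel so the new SparkSession picks the jar up.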
One workaround would be to create an init action which sets up a systemd service that regularly polls an HDFS directory and syncs it into one of the existing classpath directories, such as /usr/lib/spark/jars:
#!/bin/bash
# Sets up continuous sync'ing of an HDFS directory into /usr/lib/spark/jars
# Manually copy jars into this HDFS directory to have them sync into
# ${LOCAL_DIR} on all nodes.
HDFS_DROPZONE='hdfs:///usr/lib/jars'
LOCAL_DIR='file:///usr/lib/spark/jars'
readonly ROLE="$(/usr/share/google/get_metadata_value attributes/dataproc-role)"
if [[ "${ROLE}" == 'Master' ]]; then
hdfs dfs -mkdir -p "${HDFS_DROPZONE}"
fi
SYNC_SCRIPT='/usr/lib/hadoop/libexec/periodic-sync-jars.sh'
cat << EOF > "${SYNC_SCRIPT}"
#!/bin/bash
while true; do
  sleep 5
  hdfs dfs -ls ${HDFS_DROPZONE}/*.jar 2>/dev/null | grep hdfs: | \
    sed 's/.*hdfs:/hdfs:/' | xargs -n 1 basename 2>/dev/null | sort \
    > /tmp/hdfs_files.txt
  hdfs dfs -ls ${LOCAL_DIR}/*.jar 2>/dev/null | grep file: | \
    sed 's/.*file:/file:/' | xargs -n 1 basename 2>/dev/null | sort \
    > /tmp/local_files.txt
  comm -23 /tmp/hdfs_files.txt /tmp/local_files.txt > /tmp/diff_files.txt
  if [ -s /tmp/diff_files.txt ]; then
    for FILE in \$(cat /tmp/diff_files.txt); do
      # \$(date) is escaped so the timestamp is taken when the copy runs, not when this script is generated.
      echo "\$(date): Copying \${FILE} from ${HDFS_DROPZONE} into ${LOCAL_DIR}"
      hdfs dfs -cp "${HDFS_DROPZONE}/\${FILE}" "${LOCAL_DIR}/\${FILE}"
    done
  fi
done
EOF
chmod 755 "${SYNC_SCRIPT}"
SERVICE_CONF='/usr/lib/systemd/system/sync-jars.service'
cat << EOF > "${SERVICE_CONF}"
[Unit]
Description=Periodic Jar Sync
[Service]
Type=simple
ExecStart=/bin/bash -c '${SYNC_SCRIPT} &>> /var/log/periodic-sync-jars.log'
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
chmod a+rw "${SERVICE_CONF}"
systemctl daemon-reload
systemctl enable sync-jars
systemctl restart sync-jars
systemctl status sync-jars
Then, whenever you need a jarfile to be available everywhere, you just copy it into hdfs:///usr/lib/jars; the periodic poller will automatically stick it into /usr/lib/spark/jars, and you then restart your kernel to pick it up. You can add jars to that HDFS directory either by SSH'ing in and running hdfs dfs -cp directly, or by shelling out from your Jupyter notebook with subprocess:
import subprocess
sp = subprocess.Popen(
['hdfs', 'dfs', '-cp',
'gs://spark-lib/bigquery/spark-bigquery-latest.jar',
'hdfs:///usr/lib/jars/spark-bigquery-latest.jar'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out, err = sp.communicate()
print(out)
print(err)
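The SSH'ing-in route mentioned above is just the same copy run directly on the cluster, for example:
# Equivalent of the subprocess call above, run from an SSH session on the master node:
hdfs dfs -cp \
  gs://spark-lib/bigquery/spark-bigquery-latest.jar \
  hdfs:///usr/lib/jars/spark-bigquery-latest.jar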
In a Dockerfile, the common way to copy a directory as a non-root user (e.g. UID 1000) is the following:
COPY --chown=1000:1000 /path/to/host/dir/ /path/to/container/dir
However, I want to use variables instead. For example
ARG USER_ID=1000
ARG GROUP_ID=1000
COPY --chown=${USER_ID}:${GROUP_ID} /path/to/host/dir/ /path/to/container/dir
But this is not possible. Is there a workaround?
Note: I know that a possible workaround is to copy the directory as root and then run chown on it (variables work fine with RUN). However, the image grows in size just because chown runs as a separate command.
You can create a user and group before using --chown:
mkdir -p test && cd test
mkdir -p path/to/host/dir/
touch path/to/host/dir/myfile
Create your Dockerfile:
FROM busybox
ARG USER_ID=1000
ARG GROUP_ID=1000
RUN addgroup -g ${GROUP_ID} mygroup \
&& adduser -D myuser -u ${USER_ID} -g myuser -G mygroup -s /bin/sh -h /
COPY --chown=myuser:mygroup /path/to/host/dir/ /path/to/container/dir
Build the image
docker build -t example .
Or build it with a custom UID/GID:
docker build -t example --build-arg USER_ID=1234 --build-arg GROUP_ID=2345 .
And verify that the file was chown'ed
docker run --rm example ls -la /path/to/container/dir
total 8
drwxr-xr-x 2 myuser mygroup 4096 Dec 22 16:08 .
drwxr-xr-x 3 root root 4096 Dec 22 16:08 ..
-rw-r--r-- 1 myuser mygroup 0 Dec 22 15:51 myfile
Verify that it has the correct uid/gid:
docker run --rm example ls -lan /path/to/container/dir
total 8
drwxr-xr-x 2 1234 2345 4096 Dec 22 16:08 .
drwxr-xr-x 3 0 0 4096 Dec 22 16:08 ..
-rw-r--r-- 1 1234 2345 0 Dec 22 15:51 myfile
Note: there is an open feature-request for adding this functionality:
issue #35018 "Allow COPY command's --chown to be dynamically populated via ENV or ARG"
In my case, I used my own UID and GID numbers, which works because I have the same non-root account in both the DEV and PROD environments.
COPY --chown=1000:1000 /path/to/host/dir/ /path/to/container/dir
You can find the user and group IDs with the Linux id command.
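If you prefer not to hard-code the numbers, the same values can be passed to the build args from the previous answer's Dockerfile (a small sketch reusing that example image name):
# Pass the current user's UID and GID as build args (reuses the ARGs from the Dockerfile above).
docker build -t example \
  --build-arg USER_ID=$(id -u) \
  --build-arg GROUP_ID=$(id -g) .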
Start redis-server:
redis-server /usr/local/etc/redis.conf
Redis config file (/usr/local/etc/redis.conf):
...
requirepass 'foobared'
...
Rails - application.yml:
...
development:
redis_password: 'foobared'
...
The Error:
Redis::CommandError - NOAUTH Authentication required.:
...
app/models/user.rb:54:in `last_accessed_at'
...
app/models/user.rb:54:
Line 53 - def last_accessed_at
Line 54 - Rails.cache.read(session_key)
Line 55 - end
and session_key is just an attribute of the User model.
BTW:
± ps -ef | grep redis
501 62491 57789 0 1:45PM ttys001 0:00.37 redis-server 127.0.0.1:6379
501 62572 59388 0 1:54PM ttys002 0:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --exclude-dir=.svn redis
I got the same error and just commented out the requirepass line:
# requirepass 'foobared'
It worked in my case.
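Note that the change only takes effect after restarting the server; a quick way to verify (using the same config file as above):
# Restart Redis with the edited config, then confirm that no authentication is required anymore.
redis-server /usr/local/etc/redis.conf
redis-cli ping    # should reply PONG without needing AUTH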
I want to create a shell script that iterates through folders and deletes every folder matching [versionnumber-n], where n > 0.
The current version number is in a file whose content looks like this:
MAVEN_VERSION=1.2.7.0-SNAPSHOT
Here's an example:
The directory listing looks like this:
drwxrwxr-x 4 jenkins jenkins 4096 Jul 29 10:54 ./
drwxrwxr-x 20 jenkins jenkins 4096 Jul 4 09:20 ../
drwxr-xr-x 2 jenkins jenkins 4096 Jul 23 12:35 1.2.6.0-SNAPSHOT/
drwxr-xr-x 2 jenkins jenkins 4096 Jul 28 23:13 1.2.7.0-SNAPSHOT/
-rw-rw-r-- 1 jenkins jenkins 403 Jul 29 10:11 maven-metadata-local.xml
-rw-r--r-- 1 jenkins jenkins 403 Jul 28 23:13 maven-metadata-mtx-snapshots.xml
-rw-r--r-- 1 jenkins jenkins 40 Jul 28 23:13 maven-metadata-mtx-snapshots.xml.sha1
-rw-r--r-- 1 jenkins jenkins 403 Jul 28 23:13 maven-metadata.xml
-rw-r--r-- 1 jenkins jenkins 32 Jul 28 23:13 maven-metadata.xml.md5
-rw-r--r-- 1 jenkins jenkins 40 Jul 28 23:13 maven-metadata.xml.sha1
-rw-r--r-- 1 jenkins jenkins 186 Jul 28 23:13 resolver-status.properties
I want the script to delete the folder 1.2.6.0-SNAPSHOT/ but not 1.2.7.0-SNAPSHOT/. If there were folders like 1.2.5.0-SNAPSHOT/ or 1.2.4.0-SNAPSHOT/, they should be deleted too.
What I have at this point:
.*(?!1.2.7.0)(-SNAPSHOT)
Which unfortunately matches both folders (in the example above)
edit: just hit submit too early ...
With Bash you can just use negation with extended pathname expansion.
shopt -s extglob
rm -fr /dir/1.2.!(7).0-SNAPSHOT
Dry run example:
$ ls -1
1.2.10.0-SNAPSHOT
1.2.5.0-SNAPSHOT
1.2.6.0-SNAPSHOT
1.2.7.0-SNAPSHOT
a
$ echo rm -fr 1.2.!(7).0-SNAPSHOT
rm -fr 1.2.10.0-SNAPSHOT 1.2.5.0-SNAPSHOT 1.2.6.0-SNAPSHOT
See Extended Pattern Matching and Filename Expansion.
How I did it in the end:
if [ -z "$MAVEN_VERSION_SERVER" ]
then
  echo -e "\$MAVEN_VERSION_SERVER NOT set!\nexiting ..."
else
  find /var/lib/jenkins/.m2/repository/de/db/mtxbes -mindepth 1 -type d \
    -regex '.*SNAPSHOT' -not -name "$MAVEN_VERSION_SERVER" | xargs -d '\n' rm -fr
fi
(the $MAVEN_VERSION_SERVER variable is set beforehand by Groovy scripts that read the version file)
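For reference, a dry run of the same expression, plus a pure-shell way of filling the variable, might look like this; the file name version.properties is only an illustration, since the question doesn't name the file:
# Hypothetical: read the version from the file shown above (the file name is assumed).
MAVEN_VERSION_SERVER=$(grep '^MAVEN_VERSION=' version.properties | cut -d= -f2)

# Dry run: same find expression as above, but print the matches instead of deleting them.
find /var/lib/jenkins/.m2/repository/de/db/mtxbes -mindepth 1 -type d \
  -regex '.*SNAPSHOT' -not -name "$MAVEN_VERSION_SERVER" -print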
mercurial-server runs on Ubuntu 12.04 LTS
myserver@ip:/etc$ hg --version
Mercurial Distributed SCM (version 2.0.2)
myserver@ip:/etc$ dpkg -s mercurial-server
Package: mercurial-server
Version: 1.2-1
....
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ ls -ltr
total 12
-rw-r--r-- 1 root root 180 Oct 10 2011 logging.rc
-rw-r--r-- 1 root root 139 Oct 10 2011 access.rc
-rw-r--r-- 1 root root 74 Mar 13 22:14 check.rc
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat check.rc
[hooks]
pretxncommit.author_check = /SOURCE/mercurial-server/validate.sh
#manually added here too
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat ~hg/repos/hgadmin/.hg/hgrc
# WARNING: when these hooks run they will entirely destroy and rewrite
# ~/.ssh/authorized_keys
[extensions]
hgext.purge =
[hooks]
changegroup.aaaab_update = hg update -C default > /dev/null
changegroup.aaaac_purge = hg purge --all > /dev/null
changegroup.refreshauth = python:mercurialserver.refreshauth.hook
pretxncommit.author_check = /SOURCE/mercurial-server/validate.sh
myserver@ip:/etc/mercurial-server/remote-hgrc.d$ cat /SOURCE/mercurial-server/validate.sh
#!/bin/bash
echo "REMUSR:$REMOTE_USER"
echo "ATHR:`hg tip --template "{author}\n"`b"
exit 1
myserver@ip:~$ sudo -u hg cat ~hg/.ssh/authorized_keys
no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,command="/usr/share/mercurial-server/hg-ssh root/user1/user1.pub" ssh-rsa AAAAB3xOMN8ZiF user1@server.com
no-pty,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,command="/usr/share/mercurial-server/hg-ssh users/user2/user2.pub" ssh-rsa AAAAB3N..0HchQQw== user2@server.com
After this, from a local machine (Windows), I cloned a test project, made a change, committed, and pushed, and it all succeeded without any error or message. I tried this with both the initial user/key and a user/key added via hgadmin push.
D:\hg\testproj>hg push
pushing to ssh://hg@myserver.com/testproj
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
It works with:
$ cat check.rc
[hooks]
pretxnchangegroup.author_check = /SOURCE/mercurial-server/validate.sh
It does not work with pretxncommit, because that hook only fires in the repository where the commit is created (the local clone); changes arriving on the server through a push come in as a changegroup, so the server-side check has to hook pretxnchangegroup.
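For completeness, a sketch of what the server-side validate.sh could check with that hook; it reuses the $REMOTE_USER variable and the author template from the original script, assumes the convention is that the changeset author must equal the mercurial-server user name, and relies on $HG_NODE, which Mercurial sets to the first incoming changeset for changegroup-style hooks:
#!/bin/bash
# Sketch of /SOURCE/mercurial-server/validate.sh for a pretxnchangegroup hook.
# Reject the push if any incoming changeset has an author other than the pushing user.
BAD=$(hg log -r "$HG_NODE:tip" --template '{author}\n' | grep -vxF "$REMOTE_USER")
if [ -n "$BAD" ]; then
  echo "rejected: author(s) [$BAD] do not match remote user $REMOTE_USER" >&2
  exit 1
fi
exit 0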