What is wrong with the setup of Hyperledger Fabric? - blockchain

Because I wanted to install a fresh, clean version of Hyperledger Fabric, I deleted my month-old Hyperledger directory and ran "vagrant destroy".
I then ran "vagrant up" and "vagrant ssh" successfully.
"make peer" completed successfully, but when I ran "peer", it failed.
When I run "make peer" and "peer" again, the error below pops up:
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ make peer
make: Nothing to be done for `peer'.
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ peer
No command 'peer' found, did you mean:
Command 'pee' from package 'moreutils' (universe)
Command 'beer' from package 'gerstensaft' (universe)
Command 'peel' from package 'ears' (universe)
Command 'pear' from package 'php-pear' (main)
peer: command not found
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric$ cd peer
vagrant@ubuntu-1404:/opt/gopath/src/github.com/hyperledger/fabric/peer$ ls -l
total 60
drwxr-xr-x 1 vagrant vagrant 204 Jun 26 01:16 bin
-rw-r--r-- 1 vagrant vagrant 17342 Jun 25 14:18 core.yaml
-rw-r--r-- 1 vagrant vagrant 35971 Jun 25 14:18 main.go
-rw-r--r-- 1 vagrant vagrant 1137 Jun 23 08:46 main_test.go

The peer binary is built into the ./build/bin/ folder.
For your configuration, the full path is "/opt/gopath/src/github.com/hyperledger/fabric/build/bin/".

Let me tell you one thing I observed when I pulled code from GitHub last week (Thursday, to be exact).
The make command created the executable in "/opt/gopath/src/github.com/hyperledger/fabric/build/bin/". One nice thing I found was that it also copied the binary to "/hyperledger/build/bin", and the $PATH variable then included "/hyperledger/build/bin" as well.
So to answer your question, you have two options:
1. Keep your current version of the code, navigate into the build/bin folder in the fabric directory, and check whether the peer executable is present there. If it is, carry on and run it from there.
2. Pull the latest copy from github.com and run make peer from the fabric directory as usual; you will then be able to execute peer from anywhere. :)
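If the binary does exist under build/bin but the shell cannot find it, adding that directory to PATH is a quick way to check. A minimal sketch, using the fabric path from the question (adjust if yours differs):

```shell
# Make the freshly built peer binary visible to the current shell session.
# The fabric path below is the one from the question; adjust if yours differs.
export PATH="$PATH:/opt/gopath/src/github.com/hyperledger/fabric/build/bin"
# now `peer` can be invoked from any directory, e.g.:
# peer node start
```

To make this permanent, the export line can be appended to ~/.bashrc inside the VM.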

Related

How to Change Permissions: Permission Denied Shiny app on EC2 AWS

I am trying to host a Shiny app on an AWS EC2 instance for the first time. I have been following this [tutorial](https://www.charlesbordet.com/en/guide-shiny-aws/#3-how-to-configure-shiny-server).
I adjusted /etc/shiny-server/shiny-server.conf (via sudo nano) with sanitize_errors false; so the errors display at http://18.144.34.215:3838/. It seems the shiny-server process does not have the correct permissions on that folder.
This is my first attempt at hosting a Shiny app on EC2, and I am a bit lost after reading other posts I found while searching. What would be the correct commands to give permissions on this folder?
Also, please let me know what info you need from me in order to understand this error better.
Here are my folder permissions for 'RIBBiTR_DataRepository':
-rw-rwSr-- 1 ubuntu ubuntu 35149 Feb 1 21:32 LICENSE
-rw-rwSr-- 1 ubuntu ubuntu 10 Feb 1 21:32 README.md
drwxrwsrwx 5 ubuntu ubuntu 4096 Feb 1 21:38 RIBBiTR_DataRepository
-rw-rwSr-- 1 ubuntu ubuntu 205 Feb 1 21:32 db_forms.Rproj
drwxrwsr-x 2 ubuntu ubuntu 4096 Feb 1 21:32 misc
To add to this, when I try to view the logs I receive a permission denied error:
ubuntu@ip-172-30-1-21:/var/log/shiny-server$ sudo tail RIBBiTR_DataRepository-shiny-20230201-215702-44689.log
su: ignoring --preserve-environment, it's mutually exclusive with --login
-bash: line 1: cd: /srv/shiny-server/db_forms/RIBBiTR_DataRepository: Permission denied
The issue was in my shiny-server.conf file. I updated the file to specify the user to run as: run_as ubuntu;
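For reference, a minimal shiny-server.conf with that change might look like the sketch below. The server block mirrors the stock configuration shipped with Shiny Server, so the run_as line is the actual fix; restart the service afterwards (sudo systemctl restart shiny-server).

```
# /etc/shiny-server/shiny-server.conf (sketch; server block mirrors the stock config)
run_as ubuntu;

server {
  listen 3838;

  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
    directory_index on;
  }
}
```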

Jenkins Plugins are not installed : Command Line

I am trying to install Jenkins plugins from an AWS S3 bucket.
Code for installing the Jenkins plugins:
plugin_manager_url="https://github.com/jenkinsci/plugin-installation-manager-tool/releases/download/2.12.3/jenkins-plugin-manager-2.12.3.jar"
jpath="/var/lib/jenkins"
echo "Installing Jenkins Plugin Manager..."
wget -O $${jpath}/jenkins-plugin-manager.jar $${plugin_manager_url}
chown jenkins:jenkins $${jpath}/jenkins-plugin-manager.jar
cd $${jpath}
mkdir pluginsInstalled
aws s3 cp "s3://bucket/folder-with-plugins.zip" .
unzip folder-with-plugins.zip
echo 'Installing Jenkins Plugins...'
cd plugins/
for plugin in *.jpi; do
java -jar $${jpath}/jenkins-plugin-manager.jar --war /usr/share/java/jenkins.war --plugin-download-directory $${jpath}/pluginsInstalled --plugins $(echo $plugin | cut -f 1 -d '.')
done
chown -R jenkins:jenkins $${jpath}/pluginsInstalled
systemctl start jenkins  # Jenkins itself was installed before the plugins and is up and running
In the above snippet, I unzip the S3 folder; all plugins are inside the "plugins/" folder with a .jpi extension, so I trim that extension while installing, and the installed plugins end up in the "pluginsInstalled" folder.
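As an aside, the `cut -f 1 -d '.'` trim in the loop would mangle any plugin file name that itself contains a dot, since it cuts at the first dot rather than the extension. A safer sketch (file names here are hypothetical examples) strips only the trailing .jpi with shell parameter expansion:

```shell
# Sketch: derive plugin IDs from .jpi file names by stripping only the
# trailing extension, instead of cutting at the first dot.
set -e
workdir=$(mktemp -d)
touch "$workdir/git.jpi" "$workdir/docker-java-api.jpi"
for plugin in "$workdir"/*.jpi; do
  base=$(basename "$plugin")
  echo "${base%.jpi}"   # e.g. git, docker-java-api
done
```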
I have DEV and PROD AWS accounts. I build an AMI using EC2 Image Builder in the DEV account and share/use that AMI in PROD for security reasons.
So the user-data script that installs Jenkins and its plugins is part of building the AMI. When I check EC2 Image Builder's build instance, I can see the user data ran properly.
But when I check the same AMI used in PROD, the Jenkins plugins are not installed.
Jenkins Version : 2.346.2
And the error log for Jenkins is:
java.lang.IllegalArgumentException: No hudson.security.AuthorizationStrategy implementation found for folderBased
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lambda$lookupDescriptor$11(HeteroDescribableConfigurator.java:211)
at io.vavr.control.Option.orElse(Option.java:321)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lookupDescriptor(HeteroDescribableConfigurator.java:210)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.lambda$configure$3(HeteroDescribableConfigurator.java:84)
at io.vavr.Tuple2.apply(Tuple2.java:238)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.configure(HeteroDescribableConfigurator.java:83)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.check(HeteroDescribableConfigurator.java:92)
at io.jenkins.plugins.casc.impl.configurators.HeteroDescribableConfigurator.check(HeteroDescribableConfigurator.java:55)
at io.jenkins.plugins.casc.BaseConfigurator.configure(BaseConfigurator.java:350)
at io.jenkins.plugins.casc.BaseConfigurator.check(BaseConfigurator.java:286)
at io.jenkins.plugins.casc.ConfigurationAsCode.lambda$checkWith$8(ConfigurationAsCode.java:776)
at io.jenkins.plugins.casc.ConfigurationAsCode.invokeWith(ConfigurationAsCode.java:712)
at io.jenkins.plugins.casc.ConfigurationAsCode.checkWith(ConfigurationAsCode.java:776)
at io.jenkins.plugins.casc.ConfigurationAsCode.configureWith(ConfigurationAsCode.java:761)
at io.jenkins.plugins.casc.ConfigurationAsCode.configureWith(ConfigurationAsCode.java:637)
at io.jenkins.plugins.casc.ConfigurationAsCode.configure(ConfigurationAsCode.java:306)
at io.jenkins.plugins.casc.ConfigurationAsCode.init(ConfigurationAsCode.java:298)
Caused: java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:109)
Caused: java.lang.Error
at hudson.init.TaskMethodFinder.invoke(TaskMethodFinder.java:115)
at hudson.init.TaskMethodFinder$TaskImpl.run(TaskMethodFinder.java:185)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:305)
at jenkins.model.Jenkins$5.runTask(Jenkins.java:1158)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:222)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:121)
at jenkins.security.ImpersonatingExecutorService$1.run(ImpersonatingExecutorService.java:68)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused: org.jvnet.hudson.reactor.ReactorException
at org.jvnet.hudson.reactor.Reactor.execute(Reactor.java:291)
at jenkins.InitReactorRunner.run(InitReactorRunner.java:49)
at jenkins.model.Jenkins.executeReactor(Jenkins.java:1193)
at jenkins.model.Jenkins.<init>(Jenkins.java:983)
at hudson.model.Hudson.<init>(Hudson.java:86)
at hudson.model.Hudson.<init>(Hudson.java:82)
at hudson.WebAppMain$3.run(WebAppMain.java:247)
Caused: hudson.util.HudsonFailedToLoad
at hudson.WebAppMain$3.run(WebAppMain.java:264)
When I check the Jenkins status on PROD, where the AMI with the installed plugins is used, Jenkins is somehow not able to restart. It gives the following error in its status output:
Aug 18 21:08:40 ip-10-220-74-95.ec2.internal systemd[1]: Starting Jenkins Continuous Integration Server...
Aug 18 21:08:45 ip-10-220-74-95.ec2.internal jenkins[6656]: Exception in thread "Attach Listener" Agent failed to start!
Aug 18 21:08:50 ip-10-220-74-95.ec2.internal jenkins[6656]: WARNING: An illegal reflective access operation has occurred
Aug 18 21:08:50 ip-10-220-74-95.ec2.internal jenkins[6656]: WARNING: Illegal reflective access by org.codehaus.groovy.vmplugin.v7.Java7$...s,int)
Aug 18 21:08:50 ip-10-220-74-95.ec2.internal jenkins[6656]: WARNING: Please consider reporting this to the maintainers of org.codehaus.g...ava7$1
Aug 18 21:08:50 ip-10-220-74-95.ec2.internal jenkins[6656]: WARNING: Use --illegal-access=warn to enable warnings of further illegal ref...ations
Aug 18 21:08:50 ip-10-220-74-95.ec2.internal jenkins[6656]: WARNING: All illegal access operations will be denied in a future release
The issue was that I was installing plugins using:
java -jar ./jenkins-plugin-manager.jar --war ./jenkins.war --plugin-download-directory <dir> --plugins <plugins_list>
This resolves plugins against the latest Jenkins version rather than the version actually installed.
The fix was to pin the Jenkins version our project uses:
sudo java -jar ./jenkins-plugin-manager.jar --jenkins-version <JENKINS_VERSION> --plugin-download-directory <dir> --plugins <plugins_list>

Issue with elixir-phoenix-on-google-compute-engine

I'm trying to deploy to GCP Compute Engine by following this tutorial:
https://cloud.google.com/community/tutorials/elixir-phoenix-on-google-compute-engine
There are no errors while following the tutorial, but I am unable to connect to http://${external_ip}:8080 after creating the firewall rules.
The release build is already in Google Cloud Storage.
I copied hello
gsutil cp _build/prod/rel/hello/bin/hello\
gs://${BUCKET_NAME}/hello-release
instead of hello.run
gsutil cp _build/prod/rel/hello/bin/hello.run \
gs://${BUCKET_NAME}/hello-release
My instance-startup.sh:
#!/bin/sh
set -ex
export HOME=/app
mkdir -p ${HOME}
cd ${HOME}
RELEASE_URL=$(curl \
-s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/release-url" \
-H "Metadata-Flavor: Google")
gsutil cp ${RELEASE_URL} hello-release
chmod 755 hello-release
wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 \
-O cloud_sql_proxy
chmod +x cloud_sql_proxy
mkdir /tmp/cloudsql
PROJECT_ID=$(curl \
-s "http://metadata.google.internal/computeMetadata/v1/project/project-id" \
-H "Metadata-Flavor: Google")
./cloud_sql_proxy -projects=${PROJECT_ID} -dir=/tmp/cloudsql &
PORT=8080 ./hello-release start
gcloud compute instances get-serial-port-output shows
...
Feb 23 18:02:35 hello-instance startup-script: INFO startup-script: + PORT=8080 ./hello-release start
Feb 23 18:02:35 hello-instance startup-script: INFO startup-script: + ./cloud_sql_proxy -projects= hello -dir=/tmp/cloudsql
Feb 23 18:02:35 hello-instance startup-script: INFO startup-script: 2019/02/23 18:02:35 Rlimits for file descriptors set to {&{8500 8500}}
Feb 23 18:02:35 hello-instance startup-script: INFO startup-script: ./hello-release: 31: exec: /app/hello_rc_exec.sh: not found
Feb 23 18:02:39 hello-instance startup-script: INFO startup-script: 2019/02/23 18:02:39 Listening on /tmp/cloudsql/hello:asia-east1:hello-db/.s.PGSQL.5432 for hello:asia-east1: hello-db
Feb 23 18:02:39 hello-instance startup-script: INFO startup-script: 2019/02/23 18:02:39 Ready for new connections
Feb 23 18:08:08 hello-instance ntpd[656]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
hello_rc_exec.sh is generated after initializing Distillery. It is stored in _build/prod/rel/hello/bin/hello_rc_exec.sh.
Firewall rules:
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
default-allow-http-8080 default INGRESS 1000 tcp:8080 False
...
I also ran ps aux | grep erl in the instance:
hello_team@hello-instance:~$ ps aux | grep erl
hello_t+ 23166 0.0 0.0 12784 1032 pts/0 S+ 08:04 0:00 grep erl
I'm not sure what information is needed to fix this.
Please ask for whatever else is needed and I will provide it.
Thank you
For posterity, here was the solution (worked out in this forum thread).
First, the poster had uploaded the hello file instead of hello.run to cloud storage. The tutorial intentionally specifies uploading hello.run because it is a full executable archive of the entire release, whereas hello is merely a wrapper script and is by itself not capable of executing the app. So this modification to the procedure needed to be reverted.
Second, the poster's app included the elixir_bcrypt library. This library includes a NIF whose platform-specific binary code is built in the deps directory (instead of the _build directory). The tutorial's procedure doesn't properly clean out binaries in deps prior to cross-compiling for deployment, and so the poster's macOS-built bcrypt library was leaking into the build. When deployed to compute engine on Debian, this crashed on initialization. The poster fixed this problem by deleting the deps directory and re-installing dependencies while cross-compiling.
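A sketch of that cleanup, assuming the tutorial's Distillery-based prod build and run from the project root inside the Linux build environment:

```shell
# Drop dependency artifacts (including NIF binaries compiled on macOS)
# and rebuild everything on the Linux side.
rm -rf deps _build
mix deps.get --only prod
MIX_ENV=prod mix release
```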
It was also noted during the discussion that the tutorial promoted a poor practice of mounting the user's app in a volume when doing a Docker cross-compilation. Instead, it should simply copy the app into the image, perform the entire build there, and use docker cp to extract the built artifact. This practice would have prevented this issue. A work item was filed to modify the tutorial accordingly.
The solution is here.
Thank you for the help everyone!

What is the default user that codedeploy runs the hook scripts as?

Background: I am facing the error AWS CodeDeploy deployment throwing "[stderr] Could not open input file" while trying to invoke a PHP file from the sh file in the AfterInstall step.
In the AfterInstall step, I am trying to run a PHP file from the afterInstall.sh file, and I am getting this error: unable to open the PHP file.
I am not sure what exactly to do. I thought of manually checking whether I could run the file as that user.
The CodeDeploy agent default user is root.
The directory listing below shows the ownership of the deployed files in their destination folder, /tmp, after a successful deployment.
ubuntu@ip-10-0-xx-xx:~$ ls -l /tmp
total 36
-rw-r--r-- 1 root root 85 Aug 2 05:04 afterInstall.php
-rw-r--r-- 1 root root 78 Aug 2 05:04 afterInstall.sh
-rw-r--r-- 1 root root 1397 Aug 2 05:04 appspec.yml
-rw------- 1 root root 3189 Aug 2 05:07 codedeploy-agent.update.log
drwx------ 2 root root 16384 Aug 2 03:01 lost+found
-rw-r--r-- 1 root root 63 Aug 2 05:04 out.log
runas is an optional field in the AppSpec file: the user to impersonate when running the script. By default, scripts run as the user running the AWS CodeDeploy agent on the instance (if you don't specify a non-root user, that is root).
To run the host agent itself as a non-root user, the environment variable CODEDEPLOY_USER needs to be set, as the host agent source code shows. The variable can be set to whatever user you want the host agent to run as.
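For example, a minimal appspec.yml that runs the AfterInstall hook as a named user might look like the sketch below (the file names and /tmp destination are taken from the question's listing; the ubuntu user is an assumption):

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /tmp
hooks:
  AfterInstall:
    - location: afterInstall.sh
      timeout: 300
      runas: ubuntu   # user to impersonate; omit to run as the agent's user (root by default)
```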

Cntlmd not starting under systemd on Centos 7.1

I had a weird error trying to start cntlmd on CentOS 7.1.
systemctl start cntlmd resulted in the following in the logs (and yes, "becoming" is exactly how the message ends in the logs :)):
systemd: Started SYSV: Cntlm is meant to be given your proxy address and becoming
The weird things are:
- it did run initially after installation;
- the exact same config works perfectly on another machine (provisioned with Chef, so 100% the same config);
- if I run it in the foreground it works, but through systemd it does not.
To "fix" it, I had to manually remove and reinstall, whereupon it worked again.
Anybody seen this error (Google reveals nothing) and know what's going on?
I realised that the /var/run/cntlm directory seemed to be removed after every boot. It turns out the /var/run/cntlm directory is never created by systemd-tmpfiles on boot (thanks to this SO answer), which then resulted in:
Feb 29 06:13:04 node01 cntlm: Using following NTLM hashes: NTLMv2(1) NT(0) LM(0)
Feb 29 06:13:04 node01 cntlm[10540]: Daemon ready
Feb 29 06:13:04 node01 cntlm[10540]: Changing uid:gid to 996:995 - Success
Feb 29 06:13:04 node01 cntlm[10540]: Error creating a new PID file
because cntlm couldn't write its pid file, as /var/run/cntlm didn't exist.
So to get systemd-tmpfiles to create the /var/run/cntlm directory on boot, add the following file as /usr/lib/tmpfiles.d/cntlm.conf:
d /run/cntlm 700 cntlm cntlm
Reboot and Bob's your uncle.
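Alternatively, the new rule can be applied immediately, without rebooting, by invoking systemd-tmpfiles by hand (as root):

```
sudo systemd-tmpfiles --create /usr/lib/tmpfiles.d/cntlm.conf
```

The line format in the conf file is `d <path> <mode> <user> <group>`, i.e. "create directory /run/cntlm, mode 700, owned by cntlm:cntlm".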