Please don't close this thread; I could not find a solution via similar questions.
I'm trying to integrate Jenkins CI (hosted on AWS EC2, Ubuntu 14.04) with a private repo, but I'm getting the following error log on my test build:
Cloning the remote Git repository
Cloning repository https://github.ccs.neu.edu/CS5500-Spring2017/team7.git
> git init /var/lib/jenkins/workspace/team7_master-JFXDKNF7YKLHDWVSBPWARTJA2O7KU5KHSA5MKXYP6NWMMCP4ODLA # timeout=10
Fetching upstream changes from https://github.ccs.neu.edu/CS5500-Spring2017/team7.git
> git --version # timeout=10
using GIT_ASKPASS to set credentials Team 7
> git fetch --tags --progress https://github.ccs.neu.edu/CS5500-Spring2017/team7.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress https://github.ccs.neu.edu/CS5500-Spring2017/team7.git +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout:
stderr: fatal: unable to access 'https://github.ccs.neu.edu/CS5500-Spring2017/team7.git/': server certificate verification failed. CAfile: /etc/ssl/certs/ca-certificates.crt CRLfile: none
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1793)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1519)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:64)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:315)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$2.execute(CliGitAPIImpl.java:512)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1061)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1101)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep.checkout(SCMStep.java:109)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:83)
at org.jenkinsci.plugins.workflow.steps.scm.SCMStep$StepExecutionImpl.run(SCMStep.java:73)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1$1.call(AbstractSynchronousNonBlockingStepExecution.java:47)
at hudson.security.ACL.impersonate(ACL.java:221)
at org.jenkinsci.plugins.workflow.steps.AbstractSynchronousNonBlockingStepExecution$1.run(AbstractSynchronousNonBlockingStepExecution.java:44)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Under 'Manage Jenkins -> Configure System' I have a GitHub Server added, with the API URL and credentials from a personal access token I created on GitHub. This token was added to Jenkins to test the connection.
The GitHub Enterprise Servers section also has the API URL and name added correctly.
Testing the connection to my private GitHub account is successful.
I created a test job, but the build fails because it cannot clone the repo.
Any help would be appreciated.
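The stderr line shows git failing against the system CA bundle (/etc/ssl/certs/ca-certificates.crt), i.e. the Jenkins host does not trust the certificate served by github.ccs.neu.edu. Not a confirmed fix, just a minimal sketch of the usual remedy, assuming you can export the enterprise CA certificate (the file names below are hypothetical):
$ sudo cp github-ccs-neu-edu-ca.crt /usr/local/share/ca-certificates/
$ sudo update-ca-certificates
# or point git itself at the certificate instead of the system bundle:
$ git config --global http.sslCAInfo /path/to/github-ccs-neu-edu-ca.crt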
Related
Any time I try to deploy the contract, I get this error (screenshot: https://ibb.co/SfzQ1MW):
gitpod /workspace/luck $ npx hardhat run scripts/deploy.js --network testnet
Downloading compiler 0.8.4
Compiling 16 files with 0.8.4
Compilation finished successfully
FetchError: request to https://moeing.tech:9545/ failed, reason: unable to verify the first certificate
at ClientRequest.<anonymous> (/workspace/luck/node_modules/node-fetch/lib/index.js:1461:11)
at ClientRequest.emit (node:events:390:28)
at TLSSocket.socketErrorListener (node:_http_client:447:9)
at TLSSocket.emit (node:events:390:28)
at emitErrorNT (node:internal/streams/destroy:157:8)
at emitErrorCloseNT (node:internal/streams/destroy:122:3)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
type: 'system',
errno: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE',
code: 'UNABLE_TO_VERIFY_LEAF_SIGNATURE'
}
I will really appreciate your help.
The issue is not specific to Hardhat; it comes from node-fetch, which Hardhat uses in the background to send the request. The main cause is a problem with your certificates, as discussed in the question Unable to verify leaf signature.
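For completeness, a common workaround for UNABLE_TO_VERIFY_LEAF_SIGNATURE (an alternative I did not use myself) is to hand Node the missing intermediate certificate through the NODE_EXTRA_CA_CERTS environment variable; the .pem path below is a placeholder:
$ export NODE_EXTRA_CA_CERTS=/path/to/intermediate-ca.pem
$ npx hardhat run scripts/deploy.js --network testnet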
What worked for me was to use a VPN; I assume there was an issue with my router. Once connected to the VPN, I managed to deploy the contract successfully.
I have tried to use Docker Toolbox to set up Hyperledger Fabric v1.0 on my local machine.
I followed this document:
http://hyperledger-fabric.readthedocs.io/en/latest/asset_setup.html
But when I tried to deploy the chaincode:
$node deploy.js
I got an error message:
info: Returning a new winston logger with default configurations
info: [Chain.js]: Constructed Chain instance: name - fabric-client1, securityEnabled: true, TCert download batch size: 10, network mode: true
info: [Peer.js]: Peer.const - url: grpc://localhost:8051 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8055 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Peer.js]: Peer.const - url: grpc://localhost:8056 options grpc.ssl_target_name_override=tlsca, grpc.default_authority=tlsca
info: [Client.js]: Failed to load user "admin" from local key value store
info: [FabricCAClientImpl.js]: Successfully constructed Fabric COP service client: endpoint - {"protocol":"http","hostname":"localhost","port":8054}
info: [crypto_ecdsa_aes]: This class requires a KeyValueStore to save keys, no store was passed in, using the default store C:\Users\daniel\.hfc-key-store
[2017-04-15 22:14:29.268] [ERROR] Helper - Error: Calling enrollment endpoint failed with error [Error: connect ECONNREFUSED 127.0.0.1:8054]
at ClientRequest.<anonymous> (C:\Users\daniel\node_modules\fabric-ca-client\lib\FabricCAClientImpl.js:304:12)
at emitOne (events.js:96:13)
at ClientRequest.emit (events.js:188:7)
at Socket.socketErrorListener (_http_client.js:310:9)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1278:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[2017-04-15 22:14:29.273] [ERROR] DEPLOY - Error: Failed to obtain an enrolled user
at ca_client.enroll.then.then.then.catch (C:\Users\daniel\helper.js:59:12)
at process._tickCallback (internal/process/next_tick.js:103:7)
events.js:160
throw er; // Unhandled 'error' event
^
Error: Connect Failed
at ClientDuplexStream._emitStatusIfDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:201:19)
at ClientDuplexStream._readsDone (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:169:8)
at readCallback (C:\Users\daniel\node_modules\grpc\src\node\src\client.js:229:12)
Is this a problem with connecting to the CA, or is there some other cause?
Edit:
Environment:
OS: Windows 10 Professional Edition
Docker Toolbox: 17.04.0-ce
Go: 1.7.5
Node.js: 6.10.0
My steps:
1. Open Docker Quickstart Terminal and enter the following commands:
$curl -L https://raw.githubusercontent.com/hyperledger/fabric/master/examples/sfhackfest/sfhackfest.tar.gz -o sfhackfest.tar.gz 2> /dev/null; tar -xvf sfhackfest.tar.gz
$docker-compose -f docker-compose-gettingstarted.yml build
$docker-compose -f docker-compose-gettingstarted.yml up -d
$docker ps
I confirmed that six containers were up and running.
2. Download the examples and install modules:
$curl -OOOOOO https://raw.githubusercontent.com/hyperledger/fabric-sdk-node/v1.0-alpha/examples/balance-transfer/{config.json,deploy.js,helper.js,invoke.js,query.js,package.json}
// This link didn't work, so I downloaded the required files from the fabric-sdk-node GitHub repo
$npm install --global windows-build-tools
$npm install
3. Try to deploy the chaincode:
$node deploy.js
There were several problems, not the least of which is that the documentation was outdated and written for a preview release of Hyperledger Fabric. Those docs are actually in the process of being removed, as we need to update our examples/samples.
You mentioned Docker Toolbox - so are you trying to run all of this on Windows or Mac?
UPDATE:
So one of the issues with Docker Toolbox (and Docker for Windows) is that you cannot use localhost / 127.0.0.1 as the address when communicating from apps on the host (even in the QuickStart Terminal) to the endpoints of the Docker containers. When the QuickStart Terminal first launches Docker, it outputs the IP address of the endpoint you should use when communicating with exposed ports.
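For example (assuming the default machine name), you can recover that address at any time with:
$ docker-machine ip default
# typically prints something like 192.168.99.100; use that in place of
# localhost/127.0.0.1 when targeting ports such as 8051-8056 and 8054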
I was having the same issue while following the latest "Writing Your First Application" tutorial (http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html). I had installed all the pre-requisites and the fabric-samples and started the local network.
When I got to the step of enrolling the Admin user, $ node enrollAdmin.js, I was getting the same error message as above, Error: connect ECONNREFUSED, followed by the localhost domain.
As the first answer suggests, the root cause is that I'm running Docker Toolbox. I'm developing on an older Mac, OSX v10.9.5, so I couldn't use Docker for Mac.
To fix the issue, I replaced 'localhost' in the enrollAdmin.js code with the IP from Docker Toolbox.
Here are the steps I took:
Started Docker with Applications > Docker Quickstart Terminal
Copied the IP from this sentence: docker is configured to use the default machine with IP...
Opened the copy of enrollAdmin.js from fabric-samples/fabcar directory
Found this code:
// be sure to change the http to https when the CA is running TLS enabled
fabric_ca_client = new Fabric_CA_Client('http://localhost:7054', tlsOptions , 'ca.example.com', crypto_suite); // <-- This is the line to change
Replaced 'localhost' with the Docker IP, leaving the port :7054 as is (see the one-liner after these steps)
Saved
Re-ran the command, $ node enrollAdmin.js
The script connected to the CA and successfully completed the Admin enrollment.
On to the next step!
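For reference, the same edit as a one-liner (BSD sed, since this was on a Mac; 192.168.99.100 is an assumed Docker Toolbox IP, substitute the one your QuickStart Terminal printed):
$ sed -i '' 's|http://localhost:7054|http://192.168.99.100:7054|' enrollAdmin.js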
I installed the Cloudera VM and started trying some basic stuff. First I just wanted to list the HDFS directories, so I issued the command below.
[cloudera@quickstart ~]$ hadoop fs -ls /
ls: Failed on local exception: java.net.SocketException: Network is unreachable; Host Details : local host is: "quickstart.cloudera/10.0.2.15"; destination host is: "quickstart.cloudera":8020;
Though ps -fu hdfs says both the namenode and datanode are running, I checked the status using the service command.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Thinking all the problems would be resolved if I restarted all the services, I executed the command below.
[cloudera@quickstart conf]$ sudo /home/cloudera/cloudera-manager --express --force
[QuickStart] Shutting down CDH services via init scripts...
[QuickStart] Disabling CDH services on boot...
[QuickStart] Starting Cloudera Manager daemons...
[QuickStart] Waiting for Cloudera Manager API...
[QuickStart] Configuring deployment...
Submitted jobs: 92
[QuickStart] Deploying client configuration...
Submitted jobs: 93
[QuickStart] Starting Cloudera Management Service...
Submitted jobs: 101
[QuickStart] Enabling Cloudera Manager daemons on boot...
Now I thought all the services would be up, so I again checked the status of the namenode service. Again it came back failed.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
Next I decided to manually stop and start the namenode service. Again, not much use.
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode stop
no namenode to stop
Stopped Hadoop namenode: [ OK ]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode status
Hadoop namenode is not running [FAILED]
[cloudera@quickstart ~]$ sudo service hadoop-hdfs-namenode start
starting namenode, logging to /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out
Failed to start Hadoop namenode. Return value: 1 [FAILED]
I checked the file /var/log/hadoop-hdfs/hadoop-hdfs-namenode-quickstart.cloudera.out. It just said:
log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA".
I also checked /var/log/hadoop-hdfs/hadoop-cmf-hdfs-NAMENODE-quickstart.cloudera.log.out and found the entry below when I searched for errors. Can anyone suggest the best way to get the services back on track? Unfortunately I am not able to access Cloudera Manager from the browser. Is there anything I can do from the command line?
2016-02-24 21:02:48,105 WARN com.cloudera.cmf.event.publish.EventStorePublisherWithRetry: Failed to publish event: SimpleEvent{attributes={ROLE_TYPE=[NAMENODE], CATEGORY=[LOG_MESSAGE], ROLE=[hdfs-NAMENODE], SEVERITY=[IMPORTANT], SERVICE=[hdfs], HOST_IDS=[quickstart.cloudera], SERVICE_TYPE=[HDFS], LOG_LEVEL=[WARN], HOSTS=[quickstart.cloudera], EVENTCODE=[EV_LOG_EVENT]}, content=Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!, timestamp=1456295437905} - 1 of 17 failure(s) in last 79302s
java.io.IOException: Error connecting to quickstart.cloudera/10.0.2.15:7184
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:249)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:198)
at com.cloudera.cmf.event.shaded.org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:133)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.checkSpecificRequestor(AvroEventStorePublishProxy.java:122)
at com.cloudera.cmf.event.publish.AvroEventStorePublishProxy.publishEvent(AvroEventStorePublishProxy.java:196)
at com.cloudera.cmf.event.publish.EventStorePublisherWithRetry$PublishEventTask.run(EventStorePublisherWithRetry.java:242)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.SocketException: Network is unreachable
You can try this:
Check which process is using port 7184 of the namenode (e.g. with the Linux netstat command), kill it, and then restart.
Or
Change your namenode port in the conf and restart Hadoop.
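A sketch of that first suggestion (the PID is a placeholder to fill in from the netstat output):
$ sudo netstat -tlnp | grep 7184          # find the process holding the port
$ sudo kill <pid-from-netstat>            # placeholder, substitute the real PID
$ sudo service hadoop-hdfs-namenode restart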
I cannot figure out why this is happening; I can build and run the Docker image locally.
Recent Events:
2015-05-25 12:57:07 UTC+1000 ERROR Update environment operation is complete, but with errors. For more information, see troubleshooting documentation.
2015-05-25 12:57:07 UTC+1000 INFO New application version was deployed to running EC2 instances.
2015-05-25 12:57:04 UTC+1000 INFO Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
2015-05-25 12:57:04 UTC+1000 ERROR [Instance: i-4775ec9b] Command failed on instance. Return code: 1 Output: (TRUNCATED)... run Docker container: vel="fatal" msg="Error response from daemon: Cannot start container 02c057b331bf3a3d912bf064f1dca3e00c95746b5748c3c4a28a5c6b452ff335: [8] System error: exec: \"bin/app\": permission denied" . Check snapshot logs for details. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
2015-05-25 12:57:03 UTC+1000 ERROR Failed to run Docker container: vel="fatal" msg="Error response from daemon: Cannot start container 02c057b331bf3a3d912bf064f1dca3e00c95746b5748c3c4a28a5c6b452ff335: [8] System error: exec: \"bin/app\": permission denied" . Check snapshot logs for details.
Dockerfile:
FROM java:8u45-jre
MAINTAINER Terence Munro <terry@zenkey.com.au>
ADD ["opt", "/opt"]
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon:daemon", "."]
USER daemon
ENTRYPOINT ["bin/app"]
EXPOSE 9000
Dockerrun.aws.json:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
],
"Volumes": []
}
Additional logs as attachment at: https://forums.aws.amazon.com/thread.jspa?threadID=181270
Any help is extremely appreciated.
@nick-humrich's suggestion of trying eb local run worked, so using eb deploy ended up working.
I had previously been uploading through the web interface.
Initially, eb deploy was giving me ERROR: TypeError :: data must be a byte string, but I found this issue, which was resolved by uninstalling pyopenssl.
So I don't know why the web interface was giving me permission denied; perhaps something to do with the zip file?
But anyway, I'm able to deploy now. Thank you.
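For anyone following along, the sequence that worked was roughly this (pyopenssl uninstalled per the linked issue):
$ pip uninstall pyopenssl
$ eb local run      # confirm the container builds and runs locally
$ eb deploy         # deploy from the CLI instead of the web console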
I had a similar problem running Docker on Elastic Beanstalk. When I pointed CMD in the Dockerfile to a shell script (/path/to/my_script.sh), the EB deployment would fail with
/path/to/my_script.sh: Permission denied.
Apparently, even though I had run RUN chmod +x /path/to/my_script.sh during the Docker build, by the time the image was run, the permissions had been changed. Eventually, to make it work I settled on:
CMD ["/bin/bash","-c","chmod +x /path/to/my_script.sh && /path/to/my_script.sh"]
I am testing my Rails app, which uses the bower-rails gem, on wercker.com.
The wercker.yml has the following setting.
- script:
    name: rake bower:install
    code: bundle exec rake bower:install
And the app's bower.json has some dependencies that point to my private git repository.
So, during the test an error occurs because wercker couldn't access the private repository. The error was like the one below.
bower foo#* ECMDERR Failed to execute "git ls-remote --tags --heads git@bitbucket.org:myrepository/foo.git", exit code of #128 Warning: Permanently added the RSA host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts. Permission denied (publickey). fatal: The remote end hung up unexpectedly
Additional error details:
Warning: Permanently added the RSA host key for IP address 'xxx.xxx.xxx.xxx' to the list of known hosts.
Permission denied (publickey).
fatal: The remote end hung up unexpectedly
rake aborted!
Any solutions?
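Not an answer, but a diagnostic sketch that may help narrow it down: confirm locally that the deploy key wercker is configured with can actually read the repo (the key path is hypothetical):
$ ssh-agent bash -c 'ssh-add /path/to/wercker_deploy_key && git ls-remote git@bitbucket.org:myrepository/foo.git'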