Blockchain: Issue setting up a dev environment for Hyperledger Fabric

I am a beginner in blockchain technology and wanted to get some hands-on experience by setting up a dev environment for Hyperledger Fabric.
I tried to set up the dev environment following the official documentation at https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
At the step where I call the network_setup.sh script to bring the network up, it fails at the last step with the following error:
...
...
Trying to pull repository docker.io/hyperledger/fabric-testenv ..
Pulling repository docker.io/hyperledger/fabric-testenv
ERROR: Error: image hyperledger/fabric-testenv not found
The image itself is not available in the repository, and hence the script fails.
Can someone guide me on how to overcome this, and where can I find good references for setting up a Fabric dev environment?

Yes, something blew up, so the URL had to be changed. The instructions have been updated, and the curl command now points to a new URL. Try it again; it should work now. The change is logged here: https://gerrit.hyperledger.org/r/#/c/8591/
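If the script still reports a missing image after you re-run the updated instructions, a quick sanity check is to confirm which hyperledger images actually landed locally and, if needed, pull one by hand. The fabric-peer name below is just an illustrative image; substitute whichever image the script says is missing.
# list the hyperledger images already pulled locally
docker images | grep hyperledger
# pull an individual image directly if the script did not fetch it
docker pull hyperledger/fabric-peer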

Related

How to get a local Cloud Foundry Instance?

I’m looking to learn about Cloud Foundry and I’m trying to get a development instance of it set up on my local Windows 10 PC. But I’m not having any luck.
I’m finding a lot of information about PCF Dev, which was deprecated a while ago. I also looked at its replacement, CF Dev (https://github.com/cloudfoundry-attic/cfdev). Its GitHub page mentions that the repository is no longer receiving updates. I still went ahead and tried installing it using the instructions in the README:
cf install-plugin -r CF-Community cfdev
But the link it uses to download the plugin is broken:
Starting download of plugin binary from repository CF-Community...
Get "https://d3p1cc0zb2wjno.cloudfront.net/cfdev/cfdev-v0.0.18-rc.36-windows.exe": dial tcp: lookup d3p1cc0zb2wjno.cloudfront.net: no such host
Can anyone recommend a way to get a development instance of Cloud Foundry set up on my local machine so I can play around with it?
Thanks
Yes, steer clear of PCF Dev and CF Dev; they may still work, but they are definitely not getting updates, so they will be way out of date by now.
My understanding, although I haven't tried this process in a while, is that the way to run Cloud Foundry locally is with VirtualBox, using bosh-deployment and cf-deployment.
For instructions on installing BOSH into VirtualBox using bosh-deployment, see the install section of the bosh-deployment docs.
With BOSH installed, follow the cf-deployment deployment guide to get CF installed. You can skip to step 4, since you're installing into VirtualBox. Be sure to read the entire document before you begin, and pay particular attention to the section with specific instructions for running locally.
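In case it helps, here is a rough sketch of what that flow looks like. The ops files, IPs, stemcell URL, and variable names below are the ones I recall from the bosh-deployment and cf-deployment guides, so treat them as assumptions and defer to the docs for the authoritative list.
# clone the two deployment repos referenced in the guides
git clone https://github.com/cloudfoundry/bosh-deployment.git
git clone https://github.com/cloudfoundry/cf-deployment.git
# stand up a local BOSH director inside VirtualBox
bosh create-env bosh-deployment/bosh.yml \
  --state ./state.json \
  --vars-store ./creds.yml \
  -o bosh-deployment/virtualbox/cpi.yml \
  -o bosh-deployment/virtualbox/outbound-network.yml \
  -o bosh-deployment/bosh-lite.yml \
  -o bosh-deployment/bosh-lite-runc.yml \
  -o bosh-deployment/jumpbox-user.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.50.6 \
  -v internal_gw=192.168.50.1 \
  -v internal_cidr=192.168.50.0/24 \
  -v outbound_network_name=NatNetwork
# point the CLI at the new director and log in with the generated credentials
bosh alias-env vbox -e 192.168.50.6 --ca-cert <(bosh int ./creds.yml --path /director_ssl/ca)
export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET=$(bosh int ./creds.yml --path /admin_password)
# apply the bosh-lite cloud config, upload a stemcell, then deploy CF (step 4 onwards in the guide)
bosh -e vbox update-cloud-config cf-deployment/iaas-support/bosh-lite/cloud-config.yml
bosh -e vbox upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-xenial-go_agent
bosh -e vbox -d cf deploy cf-deployment/cf.yml \
  -o cf-deployment/operations/bosh-lite.yml \
  -v system_domain=bosh-lite.com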

Unable to push to Google Container Registry - Permission issue

I'm having the same problem as Vaclav. I've followed the GCR quickstart to the letter, which entailed creating a new project (called gcr-project) and copying the code for a Flask (Python) app.
After building the docker image, I entered the commands:
gcloud auth configure-docker
docker tag quickstart-image gcr.io/gcr-project/quickstart-image:tag1
docker push gcr.io/gcr-project/quickstart-image:tag1
The response was:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So it would be nice to know whether the issue is with the credentials (I'm using the Cloud SDK fine for other projects) or with permissions. The documentation here suggests you need storage admin rights, but the project already has them; see the screen cap here.
I would appreciate any tips for troubleshooting this, as I was looking forward to using GCR, but this problem is a hard stop for me.
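As an aside, a standard way to double-check which roles are actually granted on the project is something like the following; gcr-project is the project ID from above, and the flags are the usual gcloud IAM listing options, so adjust as needed.
# list which members hold which roles on the project
gcloud projects get-iam-policy gcr-project \
  --flatten="bindings[].members" \
  --format="table(bindings.role, bindings.members)"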
UPDATE:
I tried the same process from Cloud Shell:
me@cloudshell:~ (gcr-project-XXXXXX)$ docker push gcr.io/gcr-project/quickstart-image:tag1
The push refers to repository [gcr.io/gcr-project/quickstart-image]
4399528b7213: Preparing
1d10b1eeca74: Preparing
75156020d862: Preparing
c5697656a146: Preparing
2a435270de82: Preparing
c35f70b5c25a: Waiting
28e260baaf1b: Waiting
556c5fb0d91b: Waiting
denied: Token exchange failed for project 'gcr-project'. Please enable Google Container Registry API in Cloud Console at https://console.cloud.google.com/apis/api/containerregistry.googleapis.com/overview?project=gcr-project before performing this operation.
me@cloudshell:~ (gcr-project-XXXXXX)$
This prompted me to check the APIs & Services dashboard to confirm the Container Registry API was enabled. It is.
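For anyone checking the same thing from the command line, a check along these lines should confirm it; the grep is just a convenience filter over the enabled-services list.
# confirm containerregistry.googleapis.com appears among the enabled services
gcloud services list --enabled | grep -i containerregistry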
UPDATE 2:
I'm having these problems on a machine running Ubuntu 19.04. Per the comments below, I was able to do a push via Cloud Shell. So I then went through the same exercise on a MacBook Pro, and it worked with no problems.
So I then uninstalled the Cloud SDK per the docs (having used the standard Linux install instructions previously) and re-installed it using the Debian/Ubuntu install instructions (version 274.0.1-0)... still no go.
When I do a docker pull on the image (because push worked on MBP) I get this error: Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
And when I do a push I get this error: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
So at this stage, given the success on the MBP and the lack thereof on the Linux/Ubuntu machine, the problem is constrained to Linux/Ubuntu installs.
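For anyone else on Linux, a couple of sanity checks worth running; the file path and helper name below are what the advanced-authentication docs describe gcloud auth configure-docker setting up, so treat them as assumptions rather than a diagnosis.
# the config written by `gcloud auth configure-docker` should map gcr.io to the gcloud helper
cat ~/.docker/config.json        # expect a "credHelpers" entry like { "gcr.io": "gcloud" }
which docker-credential-gcloud   # the helper binary must be on the PATH docker sees
# note: running docker via sudo uses root's ~/.docker/config.json, not your own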
UPDATE 3:
I got onto a separate Ubuntu server, did a clean install with sudo snap install google-cloud-sdk --classic, did everything else per the docs, and still had the exact same problem. So I reckon this is a problem specific to the Linux Cloud SDK.
Is there anyone out there in Ubuntu land who has been able to install and use the Cloud SDK with GCR recently?
I was able to replicate this issue on multiple Ubuntu machines. I tried again after the most recent Cloud SDK update (276.0.0) but had no luck.
In the end, I went with the JSON key file authentication described in the docs here as a workaround, and that worked fine.
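The workaround boils down to something like the following; the key file name is a placeholder, and the _json_key login is the flow described in the advanced-authentication docs.
# authenticate docker against gcr.io with a downloaded service account key, then push
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io
docker push gcr.io/gcr-project/quickstart-image:tag1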

GitHub Desktop: Remote end hung up unexpectedly

I am using GitHub Desktop and the website, with no git command line, and I created a repository for my Unreal Engine 4 C++ project. When I try to publish the repository to GitHub, I get this error.
I have seen many posts with this error, but none that use GitHub Desktop; they all use the git command line, which is not what I'm using.
I use Windows, and I also cannot clone the Unreal Engine C++ repositories that I created on the college PCs.
That's the best description I can give; sorry if my question is vague.
Error when publishing the repository:
`https://pastebin.com/Rzdfbrwp`
Error when cloning a repository from GitHub (repository made on a college PC):
`https://pastebin.com/72S18rD5`
You need to use the repository over SSH.
Run the following command to remove your repository's current remote:
git remote rm origin
Then add the remote with the command below and push afterwards:
git remote add origin git@github.com:username/project.git
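For completeness, the follow-up push would look something like this; the branch name is assumed, so use whatever your default branch is.
git push -u origin master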
It appears the internet connection I was using had filtering on some websites because of the house policies of a student house. Because of this, it was interfering with big repositories and stopping the cloning.
It was not a GitHub problem; I just figured out it was the internet connection, since everything works fine on other networks. Thank you all for your help; it was my mistake.

Update Parse-Server-Example

I thought this would be an easy topic to find on the web, but I couldn't find a solution.
I deployed the parse-server-example on AWS Elastic Beanstalk according to the original documentation, and it works perfectly. Can anyone give me a hint on how to update this server to the newest version? When I try to use parse-dashboard, I get the error "server version too low".
I have already cloned the parse server with the EB CLI, but I do not know how, or which files, to update.
Thanks for any hint!
In package.json, you update the version next to 'parse-server'. I think by default this is '~2.0'?
Parse Dashboard requires parse-server to be '>=2.1.4'. However, I'm currently having issues when changing the parse-server version; it breaks my AWS server instance. I currently have an issue open on GitHub (https://github.com/ParsePlatform/parse-server-example/issues/109#issuecomment-198001722), so keep an eye on that.
But yeah, that's where you update your Parse-Server version, I believe!
Once you've done this locally on your machine, you obviously need to deploy the updates to AWS via the Beanstalk dashboard, as this will install/update any node modules from package.json. A rough sketch of those steps is below.
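This sketch assumes the parse-server-example repo is cloned locally and the EB CLI is already configured for your environment, and it uses 2.1.4 only because that is the minimum the dashboard accepts; pick whatever release you actually need.
# bump the parse-server dependency in package.json, commit, and redeploy
npm install --save parse-server@2.1.4
git commit -am "Bump parse-server to 2.1.4"
eb deploy   # or deploy the commit through the Beanstalk dashboard instead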

How to use Vagrant to develop with Django locally and then deploy to EC2/Azure?

I chose Vagrant so that other developers on my team can quickly start contributing to the project. Is there any way we can also make it easy for the developed code to be deployed on EC2 or Azure servers? If there are any articles on the optimal setup, please point me to them. Thanks!
The first video of Getting Started with Django shows how to use Vagrant for local Django development and how to deploy to Heroku; you may want to use the first part of the tutorial (the one related to local development). For the second part, it depends on how you are going to deploy, but as long as your code is in a Git repository, you can clone it onto EC2/Azure from git, roughly as sketched below.
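As a very rough sketch of that git-based deploy: the hostname, repo URL, and paths are placeholders, and it assumes Python and pip are already installed on the server.
# pull the latest code onto the EC2/Azure VM over ssh and run the usual Django steps
ssh ubuntu@your-server-host <<'EOF'
  if [ ! -d app ]; then git clone https://github.com/yourteam/yourproject.git app; fi
  cd app && git pull
  pip install -r requirements.txt
  python manage.py migrate
  # in practice you would run this behind gunicorn/nginx rather than the dev server
EOF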