I am trying to run Corda as a Windows service. I followed all the steps in the documentation. As per those steps, corda.jar is invoked with the node configuration by the NSSM service manager. Nowhere is it mentioned to start the controller node, but I assume a controller node should be running as a prerequisite.
In the node.conf file:
networkMapService {
    address="networkmap.foo.bar.com:10002"
    legalName="O=FooBar NetworkMap, L=Dublin, C=IE"
}
networkMapService points to some address, so should I deploy and run the CorDapp before I run the nssm.bat file?
However, when I opened the log file I saw the error below, even though I have the certificates in place.
Exception during node startup
java.lang.IllegalArgumentException: Identity certificate not found. Please either copy your existing identity key and certificate from another node,
or if you don't have one yet, fill out the config file and run corda.jar --initial-registration.
I am clueless. Can someone please help me understand this process?
There are a few issues with the docs there:
When deploying a node, we assume that you are going to provision real certificates to it. This step is only documented in the Linux instructions (see step 11: "Provision the required certificates to your node. Contact the network permissioning service or see Network Permissioning"). You can create your own certificates by following the instructions at https://docs.corda.net/permissioning.html, and then run the registration step sketched after this list.
We are assuming that there is already a node operating a network map service at the address listed in node.conf.
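For reference, the registration step mentioned in the error message is run once from the node's base directory after node.conf has been filled out. A minimal sketch (the exact flags may vary between Corda versions):
java -jar corda.jar --initial-registration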
I'll raise a PR to fix these issues.
If you are interested in running nodes across different machines in dev mode instead, see https://docs.corda.net/tutorial-cordapp.html#running-nodes-across-machines.
I am attempting to take my existing Cloud Composer environment and connect it to a remote SQL database (Azure SQL). I've been banging my head against this for a few days and I'm hoping someone can point out where my problem lies.
Following the documentation found here, I've spun up a GKE service and SQL proxy workload. I then created a new Airflow connection, as shown here, using the full name of the service, azure-sqlproxy-service:
I test run one of my DAG tasks and get the following:
Unable to connect: Adaptive Server is unavailable or does not exist
Unsure of the issue, I decided to remote directly into one of the workers, whitelist that worker's IP on the remote DB firewall, and try to connect to the server. With no command-line MSSQL client installed, I launched Python on the worker and attempted to connect to the database with the following:
import pymssql

# pymssql's documented keyword is server=; host= is accepted as an older alias
connection = pymssql.connect(host='database.url.net', user='sa', password='password', database='database')
This gave the same error as above, with both the service name and the remote IP entered as the host. Even ignoring the service/proxy, shouldn't this Airflow worker be able to reach the remote database? I can ping websites, but checking the remote logs, the DB doesn't show any failed logins. With such a generic error and not many ideas on what to do next, I'm stuck. A few Google results suggested switching libraries, but I'm not quite sure how to do that within Airflow, or whether I even need to.
What troubleshooting steps could I take next to get at least a single worker communicating with the DB before moving on to the service/proxy?
After much pain, I've found that Cloud Composer uses Ubuntu 18.04, which currently breaks pymssql, as described here:
https://github.com/pymssql/pymssql/issues/687
I tried downgrading to 2.1.4 without success. Needing to get this done, I followed the instructions outlined in this post to use pyodbc instead:
Google Composer- How do I install Microsoft SQL Server ODBC drivers on environments
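In case it helps anyone else, here is a minimal sketch of the pyodbc equivalent of the pymssql call above, assuming the Microsoft ODBC Driver 17 for SQL Server is installed on the worker (the driver name, server, and credentials are illustrative):

import pyodbc

# The DRIVER value must match a driver name registered in /etc/odbcinst.ini
connection = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=database.url.net,1433;'
    'DATABASE=database;'
    'UID=sa;'
    'PWD=password'
)
cursor = connection.cursor()
cursor.execute('SELECT 1')
print(cursor.fetchone())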
I have followed all the steps as per the instructions. All the key services, such as Zookeeper and MongoDB, are running; I have verified this multiple times.
However, when it comes to starting kaa-node, the service always fails with an exit code.
I have verified several items on the DigitalOcean VPS, including the IP address, and validated the firewall setup.
That might be caused by a lack of necessary resources, such as memory or disk space. Please check the minimum requirements on the Getting started documentation page.
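For example, on a typical Linux VPS a quick way to check both is:

free -m    # available memory, in MB
df -h      # free disk space per filesystem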
If that is not an issue, please provide more details on the error and logs you get from the kaa-node startup.
I created a network with 4 peers using docker-compose and docker for Mac.
I deployed my blockchain on this network successfully.
Now I'm launching a 5th peer from another yml file, using the details of one of the previous peers as the discovery node.
It appears in the list returned by http://localhost:7050/network/peers; however, my blockchain is not deployed on this peer and I cannot use it to process transactions.
Do I have to deploy the chaincode again on this peer? Did I miss something?
This is a limitation in Fabric versions 0.5 and 0.6.
Network configuration cannot be changed at runtime. If you use PBFT consensus, the network configuration is hardcoded in:
fabric/consensus/pbft/config.yaml
# Maximum number of validators/replicas we expect in the network
# Keep the "N" in quotes, or it will be interpreted as "false".
"N": 4
The challenge is in updating the configuration on all peers synchronously; otherwise, they will not be able to reach consensus.
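For example, bringing your 5th peer into the PBFT network would mean making the same edit in that file on every peer and restarting them together. A sketch of the change, assuming the stock config layout (note that PBFT requires N >= 3f + 1, so N=4 already tolerates one faulty peer):

# Updated identically on all peers before the network is restarted
"N": 5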
In one of the next Fabric versions, this configuration parameter will be moved into the blockchain, and it will be possible to add new peers and modify the consensus configuration on the fly.
Update for the question in the comments:
I have only seen this high-level roadmap proposal:
I'm currently deploying a WSO2 AS cluster and am facing a strange problem with URL mapping.
I have set up two worker nodes (named was0 and was1), a manager node (named mgt), and an ELB (named elb).
The installation seems to be working fine, as I'm able to call URLs mapped on the load balancer, such as http://was0.domain/services/..., with was0.domain mapped to the load balancer IP on the machine accessing this address (outside the cluster).
When I call services on this endpoint, load balancing works: I can see that my WSDL has endpoints based on both was0 and was1, and the two worker nodes are correctly detected as application nodes on the ELB.
The problem I encounter, however, is that the was0-based URL works fine, but when I try to use the was1-based one, the load balancer returns a blank page, and I don't see any error in the logs. I have both hosts, was0 and was1, defined in my cluster configuration as application members for AS.
If, from the ELB node, I access the was1-based web service directly on the AS, it works without problem (so the service is running on the was1 node, and this node is detected and registered inside the cluster, but is not accessible through the cluster).
Finally, this results in one call working when round robin targets was0, and one call failing when it targets was1.
So I'm currently wondering whether I have understood the cluster behavior correctly: should it work for both application servers' mapped URLs, or is it normal that only the first node, was0, responds successfully? How could I force the generated WSDL to return a valid endpoint URL?
What I understood from reading the documentation is that I need to map the AS URLs to the ELB, which will then balance across all AS servers, but it doesn't seem to work like that.
Please tell me if you need any configuration excerpts, diagrams, or examples; I didn't paste them here because they're quite big :)
For information, I had the same problem when balancing across two WSO2 ESB worker nodes, but was able to solve it by forcing the WSDL URL prefix to the first node's URL (esb0) with the WSDLEPRPrefix setting in the ESB configuration. As I don't have such a setting in WSO2 AS, I don't know how to control the URL returned in the WSDL.
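For comparison, in the ESB that setting is a parameter on the HTTP transport receiver in repository/conf/axis2/axis2.xml. A sketch of what worked there (the hostname and port are illustrative, and I have not confirmed whether WSO2 AS honours the same parameter):

<!-- inside the <transportReceiver name="http" ...> element -->
<parameter name="WSDLEPRPrefix" locked="false">http://esb0.domain:8280</parameter>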
Thank you in advance for your help,
BOUCNIAUX Benjamin
After installing VMware Server I get the following error when I try to access the VMware web-based server manager:
The VMware Infrastructure Web Service at "http://localhost:8222/sdk" is not responding
Go into the services manager and check that the 'VMware Host Agent' service is running. If not, then start it and then try browsing to the site again.
VMware hostd was not working for me either.
However, when I tried to start the service, it stopped automatically. Typically, when this happens, it is because there is an error in your config.xml, located at:
C:\ProgramData\VMware\VMware Server\hostd\config.xml
In my case, checking the logs at:
C:\ProgramData\VMware\VMware Server
showed it erroring out after "Trying hostsvc".
Searching config.xml for hostsvc showed references to several things; the first was the datastore. So I checked my datastores.xml file at:
C:\ProgramData\VMware\VMware Server\hostd\datastores.xml
I found it full of all sorts of random characters instead of a properly formed XML document.
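For anyone hitting the same thing, a quick way to confirm whether the file is still well-formed XML is Python's standard library (path as above):

import xml.etree.ElementTree as ET

# Raises xml.etree.ElementTree.ParseError if the file is not well-formed XML
ET.parse(r'C:\ProgramData\VMware\VMware Server\hostd\datastores.xml')
print('datastores.xml parses cleanly')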
Renaming datastores.xml to datastorex.xml.bad allowed me to start the service, at which point I had to add my datastores back through the GUI.
Hopefully this will help someone else out. I did not find any other references in Google to this issue.
Try accessing via "http://localhost:8222" without the /sdk. You can also try the secure site via "https://localhost:8333".