Clustering Identity Server 5.1.0 with SQL Server - wso2

I am clustering Identity Server 5.1.0 referring to the following link
https://docs.wso2.com/display/CLUSTER44x/Clustering+Identity+Server+5.1.0#ClusteringIdentityServer5.1.0-ClusteringIS
The article mentions that we need a REGISTRY_LOCAL1 database for each node, which results in multiple databases if I want to create multiple nodes.
Is it necessary to create a separate REGISTRY_LOCAL1 database for each new node?

Yes. The REGISTRY_LOCAL database (WSO2CarbonDB by default) is used to store node-specific configuration, so it should not be shared among the nodes in the cluster.
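A rough sketch of what that looks like for a two-node setup against SQL Server (host, credentials and database names are illustrative):

sqlcmd -S sqlserver-host -U dbadmin -P dbpassword -Q "CREATE DATABASE REGISTRY_LOCAL1"   # for node 1
sqlcmd -S sqlserver-host -U dbadmin -P dbpassword -Q "CREATE DATABASE REGISTRY_LOCAL2"   # for node 2
# Then, in each node's repository/conf/datasources/master-datasources.xml, point the
# local registry datasource (WSO2CarbonDB) at that node's own database, e.g.
#   node 1: jdbc:sqlserver://sqlserver-host:1433;databaseName=REGISTRY_LOCAL1
#   node 2: jdbc:sqlserver://sqlserver-host:1433;databaseName=REGISTRY_LOCAL2
# The shared registry and user-store databases from the clustering guide remain shared by both nodes.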

Related

How to add a new peer to an existing Hyperledger Fabric network?

When you create a hyperledger fabric network, you define organizations, orderers and peers in crypto-config.yaml and configtx.yaml.
But how do you add a new organization, or a new peer to an existing organization, in a network that is already set up? Run cryptogen and configtxgen pointing to config files that contain only the new organizations/peers? Re-generate everything?
The whole point of using cryptogen is to help the user generate the crypto material for the peers and organizations defined in the crypto-config.yaml file. However, you can simply use openssl to generate the keys and certificates for the organization's root CA, then generate the user certificates and arrange them into a folder structure similar to what cryptogen produces, and start up your network. Adding a new peer therefore comes down to generating a new set of keys and a certificate signed by the root CA. Finally, you can start the new peer and join it to the channel by providing the genesis block, which can be fetched from the ordering service.
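For that last step, a minimal sketch with the peer CLI, assuming the CORE_PEER_* environment variables already point at the new peer and the channel is called mychannel:

# fetch the channel's genesis block (block 0) from the ordering service
peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c mychannel
# join the new peer to the channel; it will then sync the ledger from the other peers
peer channel join -b mychannel.block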
Now, the configtxgen tool helps you configure your Hyperledger Fabric network in terms of which organizations form a consortium and have the right to join the channel. Extending this configuration is a more involved process than simply adding a new peer: to complete it you will have to use the configtxlator tool; more details and an example of how to use it can be found in the following tutorial. At a high level, you read the current channel configuration, decode it into JSON, update it with the new participants, compute the delta, generate a configuration update transaction, and submit the update to the ordering service so it takes effect. Once the config update is done, you will be able to add new peers from the new organization to the channel.
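A condensed sketch of that flow, assuming a Fabric release whose configtxlator supports the proto_decode/proto_encode/compute_update subcommands, jq installed, and a channel named mychannel:

# read and decode the current channel configuration
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
# edit config.json to add the new organization/peers and save the result as modified_config.json,
# then compute the delta between the two configurations
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel --original config.pb --updated modified_config.pb --output config_update.pb
# wrap the update in an envelope, collect the required admin signatures and submit it
configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate | jq . > config_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json
configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb
peer channel signconfigtx -f config_update_in_envelope.pb
peer channel update -f config_update_in_envelope.pb -c mychannel -o orderer.example.com:7050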
You can achieve this by generating the crypto material for the new peer (using cryptogen extend), spawning the new peer, and joining that peer to the existing channel on the network so it syncs up.
You can find the complete guide at
Extending Hyperledger Fabric Network: Adding a new peer
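A minimal sketch of the crypto-material step, assuming a Fabric release that ships cryptogen extend and that the new peer has already been added to the original crypto-config.yaml (e.g. by increasing the organization's Template Count):

# re-uses the existing CA material and only generates artifacts for the newly added entries
cryptogen extend --config=crypto-config.yaml
# copy the new peer's MSP/TLS folders from crypto-config/ onto the new peer host,
# start the peer, then fetch the genesis block and run 'peer channel join' as shown above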

IBM Blockchain - Car Lease Demo state database location?

I was working on the IBM Blockchain examples and deployed the car-lease-demo sample on a Linux system. I am not able to understand how the database is stored. I see that there is a location "/var/hyperledger/production" where the database is supposed to live, but I did not find any such location.
Can anyone explain how the data is stored, how Hyperledger Fabric uses the database to store key-value pairs, and where the DB with all the data is located?
Also, I would like to know if we can use a different DB configuration, such as NoSQL databases like Neo4j or MongoDB?
The default implementation uses LevelDB as the backend store for data, and it is present on all peer nodes. You can enter the peer's Docker container in CLI mode and see it for yourself.
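For example, something along these lines (the container name and the exact sub-directories vary by Fabric version):

docker exec -it peer0.org1.example.com sh -c "ls -R /var/hyperledger/production"
# the LevelDB files for the state database and the block indexes live under this path
# inside the peer container, not on the host, which is why you don't see it on the VM itself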
Yes, you can change the default DB to another NoSQL DB. Here is an example of setting up CouchDB with Hyperledger Fabric.
As you can see, CouchDB is hosted in a separate container linked to the peer node via an open port (look at the Docker Compose file for the connection details). You can do the same for another NoSQL DB and use the correct PUT and GET APIs in chaincode to access it, but you will have to make sure the data gets replicated to all the DBs in time to maintain the consistency of the blockchain network.
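As a rough sketch, the same wiring can be expressed with plain docker run commands instead of Compose (container names, image tags and ports are illustrative):

docker run -d --name couchdb0 -p 5984:5984 couchdb
docker run -d --name peer0.org1.example.com --link couchdb0 \
  -e CORE_LEDGER_STATE_STATEDATABASE=CouchDB \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984 \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME= \
  -e CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD= \
  hyperledger/fabric-peer
# the peer then keeps its world state in CouchDB instead of the embedded LevelDB,
# which also enables rich JSON queries from chaincode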

Openvswitch (ovsdb) database migration

We have an OpenStack infrastructure consisting of one controller node, eight compute nodes and a network node. This last node is having hardware problems (disk write failures). Unfortunately, it has only one disk without replication, and there is no option now to modify it for HA support.
We already tried to "dd" that disk to another one, but the node didn't come up. So we agreed that the better choice was to build a new network node (using the same hardware specs).
Failing network node is running the following:
CentOS 7.1.1503
Openstack-neutron-openvswitch-2014.2.2-1 (Juno release)
Openvswitch-2.1.2-2
New network node:
CentOS 7.3.1611
Openstack-neutron-openvswitch-2014.2.3-1 (Juno release)
Openvswitch-2.3.1-2
We managed to export the database by just copying the conf.db file located in /etc/openvswitch to the new node. We had to convert the DB to a newer schema, since the nodes run different OVS versions. But we can't make it work like the old one: it adds new interfaces to the database records and doesn't use the ones imported from the old hardware, even though they have exactly the same names.
Is there a way to replicate the OVS configuration on the new node and make it work, considering that both machines have the same hardware? Have any of you had experience trying to move/import/export an OVS database? I can attach the database dump if necessary.
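For reference, the conversion on the new node was done roughly along these lines (the schema path is the usual location on CentOS and may differ):

systemctl stop openvswitch
cp /etc/openvswitch/conf.db /etc/openvswitch/conf.db.orig        # keep a backup of the copied DB
ovsdb-tool convert /etc/openvswitch/conf.db /usr/share/openvswitch/vswitch.ovsschema
systemctl start openvswitch
ovs-vsctl show                                                   # check which bridges/ports survived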
I think you must configure it manually from the beginning, because that database is hashed and encrypted.
When you install OpenStack on a different machine, OpenStack generates the Keystone credentials randomly;
if you use packstack, the Keystone credentials can be kept the same by generating an answer file:
packstack --gen-answer-file=openstack.conf
and edit the options you want in openstack.conf.

AWS cassandra nodes cloning

I have built a single Cassandra node on AWS and it is working fine. We want to build 5 more, so we have cloned the first one into 5 other Cassandra servers. I would like to know all the changes we need to make so that they will run as 5 new Cassandra nodes.
1) Delete all the data in the data directories, saved_caches and commitlog.
2) Update the cassandra.yaml file with the listen, broadcast and rpc addresses.
3) What changes should we make at the system level, such as hostname, gateway or anything else, for the new nodes? Kindly suggest these; I don't have much knowledge of system administration.
4) I stopped the original node's Cassandra service and DataStax agent service before taking the clones.
Please add any other things I need to change to make the cluster work with the 5 new nodes.
Many Thanks.
Datastax has a guide for planning an Amazon EC2 cluster. After reviewing it, make sure you configure the cassandra.yaml file with the correct options. If all of the nodes will have identical directory structures, you can use the same cassandra.yaml file on each node. Note that changes to cassandra.yaml require a restart to take effect.
For more information on configuring the nodes in a single data center, see "Initializing a multiple node cluster in a single data center". For multiple data centers, see "Initializing a multiple node cluster, multiple data centers".
If you follow the steps outlined in the documentation and guides, you should be able to easily add the new nodes to your cluster.
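As a minimal per-node sketch (paths are the package defaults and the IPs are illustrative), each clone would need something like:

sudo service cassandra stop                      # and the DataStax agent, if installed
sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
# edit /etc/cassandra/cassandra.yaml on each node:
#   cluster_name:   'MyCluster'                  # must be identical on every node
#   listen_address: <this node's private IP>
#   broadcast_address / rpc_address: as appropriate for your EC2 networking
#   - seeds: "<private IP of node 1>,<private IP of node 2>"   # same seed list on all nodes
sudo service cassandra start
nodetool status                                  # each node should eventually show as UN (Up/Normal)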

WSO2 EMM mysql database setup

I am using WSO2 EMM 1.1.0. The documentation talks about using MySQL instead of H2: https://docs.wso2.com/display/EMM110/Setting+up+MySQL. It talks about editing the master-datasources.xml file and updating the WSO2_CARBON_DB, WSO2_EMM_DB and WSO2AM_DB databases, and then gives steps for priming those DBs. But the master-datasources.xml file also contains WSO2_IDENTITY_DB, SOCIAL_CACHE, SOCIAL_CASSANDRA_DB and JAGH2. I expect all of those can be moved to MySQL as well, but I don't see the database scripts to set them up. What is the proper procedure to set up a system that uses MySQL instead of H2? Not to mention that the EMM setup script has the database name hard-coded ("USE WSO2EMM_DB"), which bypasses the master-datasources.xml configuration.
Thanks,
Brian
It is mentioned in this documentation [1] under the topic 'How to migrate from H2 to MySQL'.
[1] - https://docs.wso2.com/display/EMM110/Upgrading+from+a+Previous+Release
You need to configure WSO2EMM_DB, WSO2AM_DB, WSO2CARBON_DB and WSO2IDENTITY_DB if you are going ahead with a larger deployment. H2 is set up just to make the out-of-the-box experience better. You can create those DBs, configure master-datasources.xml properly for all of the above DBs, and then run the server with the -Dsetup flag. It will get the configuration done automatically.
If that fails, you can also go to the SERVER_HOME/dbscripts folder, where you will find the scripts for all of the above databases. Run them separately and start the server in the usual way, as mentioned in our documentation.
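A condensed sketch of that flow (database names are the defaults from the documentation; SERVER_HOME is the extracted product directory and the MySQL credentials are illustrative):

mysql -u root -p -e "CREATE DATABASE WSO2EMM_DB; CREATE DATABASE WSO2AM_DB; CREATE DATABASE WSO2CARBON_DB; CREATE DATABASE WSO2IDENTITY_DB;"
# point the matching <datasource> entries in
#   SERVER_HOME/repository/conf/datasources/master-datasources.xml at those databases,
# drop the MySQL JDBC driver jar into SERVER_HOME/repository/components/lib/, then:
cd SERVER_HOME/bin
sh wso2server.sh -Dsetup        # populates the schemas on first start
# or, if -Dsetup misbehaves, run the MySQL variants of the scripts under
# SERVER_HOME/dbscripts/ against the databases above and start the server normally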