Hi all. I have some confusion about the registry.
1. The remote registry mount is configured one way in [1], but in [2] it is done with the port number and a /registry path in the URL. Are the two approaches the same?
2. I am going to install APIM, IS, and GREG. APIM and IS should share their information, so that when a new tenant is registered in APIM, IS can use that new tenant too. My question is whether both the config and governance partitions of both servers should be mounted to GREG, because I don't know which folder (config or governance) contains the user resources.
[1] http://docs.wso2.org/display/CLUSTER420/Clustering+API+Manager
[2] http://docs.wso2.org/display/Governance453/Governance+Partition+in+a+Remote+Registry
To answer your second question: if you want to share the same tenant information, you need to share the user store and the realm database. One way of doing that is to point all nodes to the same LDAP and refer to a central database for the realm.
By default, IS ships with an embedded LDAP, and if you do not have an external/central LDAP or user database, that embedded LDAP of IS can be used as the central user store. To do so, copy the user store configuration element from <IS_HOME>/repository/conf/user-mgt.xml and replace the corresponding element in the user-mgt.xml of the other nodes. You need to change the ConnectionURL appropriately with the hostname and port. If you have started IS without any port offset and all nodes run on the same server, you can use
<Property name="ConnectionURL">ldap://localhost:10389</Property>
or, if IS runs on some other machine with the IP 192.168.33.66 and is started with a port offset of 1, the connection URL would be as follows:
<Property name="ConnectionURL">ldap://192.168.33.66:10390</Property>
You also need to share the registry database. To do so, create a central database, create a datasource with a name like WSO2_REALM_DB, and refer to the JNDI name of that datasource in user-mgt.xml by changing the property <Property name="dataSource">jdbc/WSO2_REALM_DB</Property>. From this post you can find the steps to create the database and configure a datasource for it.
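As a rough sketch (the datasource name, URL, and credentials are placeholders, and a MySQL realm database is assumed), the entry in <IS_HOME>/repository/conf/datasources/master-datasources.xml and the matching user-mgt.xml property could look like this:

```xml
<!-- master-datasources.xml: central realm database (all values are placeholders) -->
<datasource>
    <name>WSO2_REALM_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2_REALM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://db-host:3306/realm_db</url>
            <username>wso2user</username>
            <password>wso2pass</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>

<!-- user-mgt.xml: point the realm at that datasource -->
<Property name="dataSource">jdbc/WSO2_REALM_DB</Property>
```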
Your first question isn't entirely clear, but regarding registry mounting in general:
The local partition stores resources specific to a node and is usually not shared among nodes (i.e., APIM node 1 may have local registry 1 and APIM node 2 local registry 2, while IS node 1 has local registry 3).
The config partition stores resources specific to a product and is usually shared among nodes in the same cluster (i.e., APIM node 1 may point to config registry 1 and APIM node 2 also points to config registry 1, while IS node 1 has config registry 2).
The governance partition is used to share resources among different products (i.e., APIM node 1, APIM node 2, and IS node 1 all point to the same governance registry 1).
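For illustration, a governance mount along the lines of [2] might look roughly like this in registry.xml (the datasource name, URLs, instance id, and cacheId are placeholders for your shared registry database):

```xml
<!-- registry.xml: remote instance backed by the shared governance DB (placeholders) -->
<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2GovRegDB</dataSource>
</dbConfig>

<remoteInstance url="https://greg-host:9443/registry">
    <id>gov</id>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <cacheId>wso2user@jdbc:mysql://db-host:3306/gov_db</cacheId>
</remoteInstance>

<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
```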
You can find a more detailed explanation in this article.
According to the official NiFi documentation, the state allows NiFi processors to "resume from the place where it left off after NiFi is restarted. Additionally, it allows for a Processor to store some piece of information so that the Processor can access that information from all of the different nodes in the cluster".
If my understanding is correct, when we configure a ZooKeeper provider, the state will not be persisted locally; instead, the data will be sent to ZooKeeper.
I've explored the ZooKeeper znodes and could not find any data related to the state; all I can find is information about the Coordinator and Primary nodes. However, the local state directory is still being filled.
The configuration is very simple: I have 3 external ZK nodes and 3 NiFi instances.
Here is an excerpt of the nifi.properties file:
nifi.cluster.is.node=true
nifi.zookeeper.connect.string=zk-node1:2181,zk-node2:2181,zk-node3:2181
nifi.state.management.embedded.zookeeper.start=false
nifi.state.management.provider.cluster=zk-provider
And here is an excerpt of the state-management.xml file:
<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">zk-node1:2181,zk-node2:2181,zk-node3:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>
When I ls the ZooKeeper tree, I can see only 2 znodes: "components", which is empty, and "leaders", which contains some data about the NiFi Coordinator and Primary nodes.
Also, when I explore the transaction logs, even after using some load-balanced connections, I cannot find anything related to the NiFi state.
Could somebody explain what data goes to ZooKeeper, and why the local state directory is still filled even when the zk-provider is configured?
Thanks.
It depends on the processor; in some cases it would never make sense to store cluster-wide state because it could never be picked up by another node. For example, with ListFile tracking a local directory, another node cannot access the same directory, so storing that state in ZooKeeper is not helpful.
There is always a local state provider backed by a write-ahead log in the state directory, and it is up to the processor to say whether state should be stored with cluster or local scope when storing it.
The documentation for each processor should say how the state is stored. For example, from ListFile:
@Stateful(scopes = {Scope.LOCAL, Scope.CLUSTER}, description = "After performing a listing of files, the timestamp of the newest file is stored. "
    + "This allows the Processor to list only files that have been added or modified after "
    + "this date the next time that the Processor is run. Whether the state is stored with a Local or Cluster scope depends on the value of the "
    + "<Input Directory Location> property.")
If Input Directory Location is "remote" then it will use cluster state, otherwise local state.
In Node-RED, I'm using some Amazon Web Services nodes (from the module node-red-node-aws), and I would like to read some configuration settings from a file (e.g. the access key ID and the secret key for the S3 nodes), but I can't find a way to set everything up dynamically, as this configuration has to be made in a config node, which can't be used in a flow.
Is there a way to do this in Node-Red?
Thanks!
Unless a node implementation specifically allows for dynamic configuration, this is not something that Node-RED does generically.
One approach I have seen is to have a flow update itself using the admin REST API into the runtime - see https://nodered.org/docs/api/admin/methods/post/flows/
That requires you to first GET the current flow configuration, modify the flow definition with the desired values and then post it back.
That approach is not suitable in all cases; the config node still only has a single active configuration.
Another approach, if the configuration is statically held in a file, is to insert the values into your flow configuration before starting Node-RED - i.e., have a place-holding config node configuration in the flow that you insert the credentials into.
Finally, you can use environment variables: if you set the configuration node's property to be something like $(MY_AWS_CREDS), then the runtime will substitute that environment variable on start-up.
You can update your package.json start script to start Node-RED with your desired credentials as environment variables:
"scripts": {
"start": "AWS_SECRET_ACCESS_KEY=<SECRET_KEY> AWS_ACCESS_KEY_ID=<KEY_ID> ./node_modules/.bin/node-red -s ./settings.js"
}
This worked perfectly for me when using the node-red-contrib-aws-dynamodb node. Just leave the credentials blank in the node and they get picked up from your environment variables.
I have setup the cluster for WSO2-IS (2 instances on different machines) based on the information provided here - https://docs.wso2.com/display/CLUSTER44x/WSO2+Clustering+and+Deployment+Guide
Set up the DB with a user store, a shared registry, and 2 local registries
Copied the DB driver jar to the component lib
Updated master-datasources.xml
Updated registry.xml (made sure the master has read-only false and the worker read-only true)
Updated axis2.xml and used WKA for the membership scheme
Performed other changes as suggested in the link
Started the master with the -Dsetup option and the worker without the -Dsetup option
Verified that the governance folder is shown as a symlink
I can see the interaction between both the nodes, there are Hazelcast messages related to node joining when the worker is started.
A user created on one node is able to log in to the other instance, and service providers are also automatically available when viewed through the UI.
The problem is that when I create a secondary user store (JDBC) in the first node and go to the list in the second node, the secondary user store is not present, and I cannot view the users in the user list either.
Am I missing something or is it the way the cluster is supposed to perform i.e. secondary user stores have to be shared in some other way?
Thanks,
Vikas
Secondary user store configurations are not synced between the two nodes by default. Once you create a secondary user store from the UI, it creates a file in the following location:
[WSO2_IS]/repository/deployment/server/userstores/
These configuration files need to be copied manually, or you have to use some synchronization mechanism to copy the file to the other node. Since this is not a frequent task, it is better to just copy the file.
For more information:
https://docs.wso2.com/display/IS500/Configuring+Secondary+User+Stores
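A minimal sketch of that copy step (paths are placeholders; both "nodes" are simulated locally here so the commands are safe to try — in a real deployment the destination directory lives on the other node and would be reached with rsync/scp over ssh):

```shell
# Simulate the userstores directories of two IS nodes locally.
# In production, DST is on the other node, e.g.:
#   rsync -av "$SRC"/ user@is-node2:/opt/wso2is/repository/deployment/server/userstores/
BASE=$(mktemp -d)
SRC="$BASE/node1/repository/deployment/server/userstores"
DST="$BASE/node2/repository/deployment/server/userstores"

mkdir -p "$SRC" "$DST"
touch "$SRC/CUSTOM_STORE.xml"   # file created when you add a store via the UI
cp -p "$SRC"/*.xml "$DST"/      # the manual copy step
ls "$DST"
```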
The Sitecore Guide states this:
To ensure that Sitecore automatically updates the link database in the
CD environment:
*The CD and CM instances must use the same name to refer to the publishing target database across the environments (typically Web).
One of the following conditions should be met:
**The Core database should be shared or replicated between the CM and CD instances.
** The Link database data should be configured to be stored in a database which is shared between CM and CD publishing target database
(typically Web).
Two things aren't clear to me:
The line with the first *: I assume this means that if I have two web DBs, one being "web" and the other being "web2", then the CM needs to use those names, CD1 needs to use "web", and CD2 needs to use "web2", yes?
The last line with **: by "shared", does this mean that CD1 and CD2 would need to use the same web database, or does it just mean that as long as CM, CD1, and CD2 are set to use their respective web DBs to store the Link DB, the Link DB will be updated on publish? Which database should the CM be configured to use to store its Link DB? It has two webs (web1, web2).
Here are details of our environment for context:
Our CM environment is 1 web server and 1 DB server. Our CD environment is two load balanced web servers, each with their own DB. So, two publishing targets for the CM to point to.
This is a good question. Typically you may have multiple web DBs for things such as pre production preview, e.g. "webpreview" as opposed to a public "web" DB. If you have two separate web DBs, "web1" and "web2" and two separate CDs use them respectively, then it seems you must have two separate publishing targets, web1 and web2. In the typical case (where "typical" maybe just means simple), there's a single web DB shared by 1-n CDs. So in your case CD1 and CD2 would both read from the same single web DB. Based on this context:
It means that whatever connection string 'name' token you use on the CM for the "web" DB, you need to use the same token on CD1 and CD2. So it could be "web" or "webpublic" or similar, but it must be consistent across all three instances (CM, CD1, CD2).
Yes, CD1 and CD2 would share the same exact web DB, as I indicated above, and thus you would set the Link database to use that shared "web" (or "webpublic") DB.
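To make the first point concrete, it is the connection string *name* that must match across instances. A sketch of ConnectionStrings.config (server, user, and database values are placeholders):

```xml
<!-- App_Config/ConnectionStrings.config on CM, CD1 and CD2 alike:
     the name "web" is what must be identical; the value can differ per environment -->
<connectionStrings>
  <add name="web"
       connectionString="user id=sc;password=xxxxx;Data Source=db-host;Database=Sitecore_Web" />
</connectionStrings>
```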
I'm rushing (never a good thing) to get Sync Framework up and running for an "offline support" deadline on my project. We have a SQL Express 2008 instance on our server and will deploy SQL CE to the clients. Clients will only sync with the server, no peer-to-peer.
So far I have the following working:
Server schema setup
Scope created and tested
Server provisioned
Client provisioned w/ table creation
I've been very impressed with the relative simplicity of all of this. Then I realized the following:
The schema created through client provisioning to SQL CE does not set up default values for uniqueidentifier types.
FK constraints are not created on client
Here is the code being used to create the client schema (pulled from an example I found online):
static void Provision()
{
    SqlConnection serverConn = new SqlConnection(
        "Data Source=xxxxx, xxxx; Database=xxxxxx; " +
        "Integrated Security=False; Password=xxxxxx; User ID=xxxxx;");

    // create a connection to the SyncCompactDB database
    SqlCeConnection clientConn = new SqlCeConnection(
        @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'");

    // get the description of the scope from the SyncDB server database
    DbSyncScopeDescription scopeDesc = SqlSyncDescriptionBuilder.GetDescriptionForScope(
        ScopeNames.Main, serverConn);

    // create CE provisioning object based on the scope
    SqlCeSyncScopeProvisioning clientProvision = new SqlCeSyncScopeProvisioning(clientConn, scopeDesc);
    clientProvision.SetCreateTableDefault(DbSyncCreationOption.CreateOrUseExisting);

    // start the provisioning process
    clientProvision.Apply();
}
When Sync Framework creates the schema on the client I need to make the additional changes listed earlier (default values, constraints, etc.).
This is where I'm getting confused (and frustrated):
I came across a code example that shows a SqlCeClientSyncProvider with a CreatingSchema event. This code example actually shows setting the RowGuid property on a column, which is EXACTLY what I need to do. However, what is a SqlCeClientSyncProvider? This whole time (4 days now) I've been working with SqlCeSyncProvider in my sync code. So there is both a SqlCeSyncProvider and a SqlCeClientSyncProvider?
The documentation on MSDN is not very good at explaining what either of these is.
I'm further confused about whether I should make schema changes at provision time or at sync time.
How would you all suggest that I make schema changes to the client CE schema during provisioning?
SqlCeSyncProvider and SqlCeClientSyncProvider are different.
The latter is what is commonly referred to as the offline provider and this is the provider used by the Local Database Cache project item in Visual Studio. This provider works with the DbServerSyncProvider and SyncAgent and is used in hub-spoke topologies.
The one you're using is referred to as a collaboration provider or peer-to-peer provider (which also works in a hub-spoke scenario). SqlCeSyncProvider works with SqlSyncProvider and SyncOrchestrator and has no corresponding Visual Studio tooling support.
Both providers require provisioning the participating databases.
The two types of providers provision the sync objects required to track and apply changes differently. The SchemaCreated event applies to the offline provider only. It is fired the first time a sync is initiated, when the framework detects that the client database has not been provisioned (it creates the user tables and the corresponding sync framework objects).
The scope provisioning used by the other provider doesn't apply constraints other than the PK, so you will have to do a post-provisioning step to apply the defaults and constraints yourself, outside of the framework.
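Such a post-provisioning step can be a plain ADO.NET pass over the client database after Apply() returns. This is only a sketch: the table, column, and constraint names below are placeholders, and SQL Server Compact restricts what ALTER TABLE can change (hence the original poster's drop-and-re-add for ROWGUIDCOL):

```csharp
// Sketch only: run after clientProvision.Apply(). All object names are placeholders.
using (SqlCeConnection conn = new SqlCeConnection(
    @"Data Source='C:\SyncSQLServerAndSQLCompact\xxxxx.sdf'"))
{
    conn.Open();

    // add a default for a uniqueidentifier column that provisioning did not carry over
    using (SqlCeCommand cmd = new SqlCeCommand(
        "ALTER TABLE Orders ADD CONSTRAINT DF_Orders_Id DEFAULT NEWID() FOR Id", conn))
    {
        cmd.ExecuteNonQuery();
    }

    // re-create a foreign key constraint that provisioning did not copy
    using (SqlCeCommand cmd = new SqlCeCommand(
        "ALTER TABLE Orders ADD CONSTRAINT FK_Orders_Customers " +
        "FOREIGN KEY (CustomerId) REFERENCES Customers (Id)", conn))
    {
        cmd.ExecuteNonQuery();
    }
}
```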
While researching solutions that don't use SyncAgent, I found that the following also works (in addition to the solution in my comment above):
1. Provision the client and let the framework create the client [user] schema. Now you have your tables.
2. Deprovision - this removes the restrictions on editing the tables/columns.
3. Make your changes (in my case, setting up Is RowGuid on PK columns and adding FK constraints) - this actually required me to drop and re-add a column, as you can't change the "Is RowGuid" property on an existing column.
4. Provision again using DbSyncCreationOption.CreateOrUseExisting.