Copying Embedded H2 database data to another registry instance - wso2

I have a local Gov. Registry 4.6.0 setup with the standard out-of-the-box embedded H2 database, where I added lots of custom artifact types and data for each of them.
I want to copy all of that data from my local H2 database to another Gov. Registry 4.6.0 instance running on a server.
I do not want to have to recreate the custom artifact types on the server instance and re-add all the data I already have locally.
I could not find any documentation on how to do that.
Is there any way to do this?
Also, how can I connect to and browse the embedded H2 database? Is there some script to run?
Thanks

There are actually two options:
1. The H2 database can be found in the <GREG_HOME>/repository/database directory. You can simply copy this directory to the other server and replace its counterpart: first stop both servers, then copy the directory over the other server's directory, and restart both (see the sketch below).
2. The Registry check-in/check-out client. More details are in the docs.
You can also connect to the H2 database and browse it, as explained in this blog; the usual console setup is sketched below as well.
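A minimal sketch of the file-copy route, assuming a default <GREG_HOME> layout (paths and host are illustrative):
# 1. Stop both Governance Registry instances.
# 2. Copy the local H2 database directory over the remote one:
scp -r /path/to/local-greg/repository/database wso2@remote-host:/path/to/server-greg/repository/
# 3. Restart both servers.
To browse the embedded H2 database, one commonly used approach (verify against your version's carbon.xml) is to enable H2's web console in <GREG_HOME>/repository/conf/carbon.xml and restart:
<!-- H2 web console; the element is usually present but commented out -->
<H2DatabaseConfiguration>
    <property name="web" />
    <property name="webPort">8082</property>
    <property name="webAllowOthers" />
</H2DatabaseConfiguration>
Then point a browser at http://localhost:8082 and connect with the JDBC URL jdbc:h2:<GREG_HOME>/repository/database/WSO2CARBON_DB (default credentials are wso2carbon/wso2carbon).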

One way is to use the check-in client that is shipped with WSO2 GReg. You can use the check-in client to take a dump of the registry (with the -f option), and then check that dump in to the new registry DB. A sketch:
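A hedged sketch of that flow with the client in <GREG_HOME>/bin (verify the exact option names against the docs for your version):
# Take a dump of the whole local registry into a file:
sh checkin-client.sh co https://localhost:9443/registry -u admin -p admin -f registry-dump.xml
# Check the dump in to the target registry on the server:
sh checkin-client.sh ci https://remote-host:9443/registry -u admin -p admin -f registry-dump.xml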

Related

How to test claims-config.xml file WSO2 Identity Server

WSO2 documentation states that claims are read from the claim-config.xml file only once here: https://docs.wso2.com/display/IS570/Adding+Claim+Mapping
"The claims configured in <IS_HOME>/repository/conf/claim-config.xml file get
applied only when you start the product for the first time, or for any newly
created tenants. With the first startup, claim dialects and claims will be
loaded from the file and persisted in the database. Any consecutive updates to
the file will not be picked up and claim dialects and claims will be loaded
from the database."
The documentation makes it seem like you only have "one chance" to see how your claim-config.xml works. I'm in the process of developing and debugging the file, though - is there a way to force WSO2 to read claim-config.xml again, or to delete the relevant data from the database so that claim-config.xml is re-read?
I'd like to avoid completely uninstalling the product and reinstalling every time I want to observe a change I made to the claim-config.xml file.
Things I have tried:
Completely deleting the database files (WSO2CARBON_DB.h2.db) from \repository\database. This prevented the WSO2 server from starting up.
Deleting the entries from the IDN_CLAIM table from the H2 database. This started the server, but I wasn't able to login.
"Completely deleting the database files (WSO2CARBON_DB.h2.db) from \repository\database. This prevented the WSO2 server from starting up."
If you are okay with completely resetting the databases, you can delete the above files. As @senthalan wrote in the comments, you then need to start the server with the '-Dsetup' flag. It recreates the DBs, re-populates the configuration and starts the server:
sh wso2server.sh -Dsetup
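Putting it together, a minimal sketch assuming the default embedded H2 setup (note: the rm wipes ALL server data, not just claims):
# 1. Stop the server.
# 2. Delete the embedded H2 database files:
rm <IS_HOME>/repository/database/*.db
# 3. Recreate the databases; claim-config.xml is read again on this start:
sh wso2server.sh -Dsetup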

WSO2 EMM mysql database setup

I am using WSO2 EMM 1.1.0. The documentation talks about using MySQL instead of H2: https://docs.wso2.com/display/EMM110/Setting+up+MySQL. It covers editing the master-datasources.xml file and updating the WSO2_CARBON_DB, WSO2_EMM_DB and WSO2AM_DB databases, and then gives steps for priming those DBs. But the master-datasources.xml file also contains WSO2_IDENTITY_DB, SOCIAL_CACHE, SOCIAL_CASSANDRA_DB and JAGH2. I expect all of those can be moved to MySQL as well, but I don't see the database scripts to set them up. What is the proper procedure for setting up a system that uses MySQL instead of H2? Not to mention that the EMM database has its name hard-coded into the setup script ("USE WSO2EMM_DB"), thus nullifying the master-datasources.xml file.
Thanks,
Brian
This is mentioned in the documentation[1] under the topic 'How to migrate from H2 to MySQL'.
[1] - https://docs.wso2.com/display/EMM110/Upgrading+from+a+Previous+Release
You need to configure WSO2EMM_DB, WSO2AM_DB, WSO2CARBON_DB and WSO2_IDENTITY_DB if you are going ahead with a larger deployment; H2 is set up just to make the out-of-the-box experience better. Create those DBs, configure master-datasources.xml properly for all of the above DBs, and then run the server with the -Dsetup flag. It will get the configuration done automatically. A sketch of one datasource entry follows.
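For reference, a hedged sketch of what one entry in <EMM_HOME>/repository/conf/datasources/master-datasources.xml might look like (host, credentials and pool settings are illustrative; the MySQL JDBC driver jar also has to be placed in repository/components/lib):
<datasource>
    <name>WSO2_CARBON_DB</name>
    <jndiConfig><name>jdbc/WSO2CarbonDB</name></jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/WSO2CARBON_DB</url>
            <username>wso2user</username>
            <password>wso2pass</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
        </configuration>
    </definition>
</datasource>
Repeat the same pattern for WSO2EMM_DB, WSO2AM_DB and WSO2_IDENTITY_DB, pointing each at its own MySQL database.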
If that fails, you can also go to the SERVER_HOME/dbscripts folder and find the scripts for all of the above databases. Run them separately (sketched below) and then start the server in the usual way described in the documentation.
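A hedged sketch of running them manually (script names under dbscripts vary by product and version, so check the folder first):
# Create each database, then load the matching MySQL script, e.g.:
mysql -u wso2user -p -e "CREATE DATABASE WSO2CARBON_DB"
mysql -u wso2user -p WSO2CARBON_DB < SERVER_HOME/dbscripts/mysql.sql
# Repeat with the EMM / AM / identity scripts against their databases.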

WSO2 Business Activity Monitor cannot edit am_stats_analyzer

I am trying to fix the bug I am dealing with that is documented here https://wso2.org/jira/browse/APIMANAGER-2032
When I go into my BAM 2.4.1 admin console and go to "Home > Manage > Analytics > List" to try and make the change to my am_stats_analyzer script, I am unable to edit it (that option is not available).
Does anyone know another way to update this script so it no longer throws this exception?
Editing hive scripts that were deployed by a toolbox is not recommended, since restarting the server or redeploying the same toolbox will wipe out your local changes. That is why the edit option was removed for hive scripts deployed via a toolbox. If you still need to change such a script via the BAM management console, click Copy New Script, make your changes there, and save the result under another name.
If you want to modify the very script that was deployed by the APIM toolbox, you need to change the toolbox itself: extract the toolbox, go to its analytics directory, edit the hive script you are interested in, zip everything up again, and rename the archive with the .tbox extension. Then redeploy the updated toolbox in BAM (see the sketch below).
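A hedged sketch of that repackaging flow (a .tbox is just a zip archive; the toolbox file name here is illustrative):
# Extract the deployed toolbox:
unzip API_Manager_Analytics.tbox -d tbox_work
# ... edit the hive script(s) under tbox_work/analytics/ ...
# Repackage and rename with the .tbox extension:
cd tbox_work && zip -r ../API_Manager_Analytics.tbox . && cd ..
# Redeploy the updated .tbox through the BAM management console.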

Connecting to neo4j using ColdFusion

Has anyone here successfully connected to neo4j using ColdFusion?
I was able to connect to neo4j 1.6.1 using this guide as a starting point: http://ghostednotes.com/2010/04/29/using-neo4j-graph-databases-with-coldfusion. However, it was a short-lived success; I have since uninstalled neo4j 1.6.1 and installed 1.7.
I am now running Apache and CF 9.0.1 on Windows XP as a local dev box. I added ...\neo4j-community-1.7\lib to my CF class path and the libraries are listed in the CF Server Java Class Path. neo4j is running fine, as I can use its administrator interface: http://localhost:7474/webadmin/#. CF and Apache are also running fine; I use them daily.
While the code below works, I'd really like to 'see' what's going on using the neo4j web administrator, so I can coordinate learning neo4j with using the data in a CF application.
Code: (Works)
// create (or open) an embedded database under dbroot
dbroot = "/tmp/neo4jtest1/";
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'var/myFirstGraphDB');
So I tried to connect to the neo4j db graph.db. However, the code fails.
Code: (fails)
graphDb = createObject('java', 'org.neo4j.kernel.EmbeddedGraphDatabase');
graphDb.init( dbroot & 'graph.db');
Error:
Object instantiation exception.
An exception occurred while instantiating a Java object. The class must not be an interface or an abstract class. Error: ''.
If I remove the "." in graph.db, it does create a "graphdb" in the neo4j data folder and successfully connects to it. However, that db is not viewable with their admin :(
I'm a novice, so please dumb down your answer.
Ok, I think what you're trying to achieve is not possible. It is not possible to access Neo4J within CF (via Java) and have the admin interface working (caveat 1 applies).
If you have put all the jars of the Neo4J package into Adobe CF, then most likely the Neo4J admin interface is looking at its own Neo4J file system. When you create the embedded server, it is not connecting to the same database, because it simply can't.
Embedded Neo4J doesn't work like a standard database connection. One embedded Neo4J reads and writes to one directory location (key word: directory; it doesn't open a single file but a whole bunch of them). No two Neo4J instances can access the same directory location (caveat 2 applies).
Ok, the caveats:
1- it is possible, in theory, to manually start up the admin interface programmatically so that it uses the embedded server that you create via Java. The Java code looks simple enough (taken from Using the server (including web administration) with an embedded database):
// Create your embedded graph db somewhere first, then wrap it:
srv = CreateObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper")
      .init(graphDb);
srv.start();
// The server is now running
// until we stop it:
srv.stop();
I did not get this working, mostly because the admin server has a bunch of dependencies that were incompatible with the rest of my setup, so I can't advise on how well the above will work.
2- it is possible to have 1 read/write Neo4J accessing one location and then have multiple read-only Neo4Js (EmbeddedReadOnlyGraphDatabase) reading the same location (but I've never tried it).
You do have the option of using the REST interface - either manually (for example with curl, as sketched below) or via the Neo4J Java REST Binding (kinda slow, though).
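For example, a quick hedged sketch against the default Neo4j 1.7 server endpoint (adjust host/port if you changed them):
# Inspect the REST service root:
curl http://localhost:7474/db/data/
# Create a node with one property:
curl -X POST -H "Content-Type: application/json" -d '{"name":"test"}' http://localhost:7474/db/data/node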
It might be worth reading the Deployment Scenarios documentation before getting too deep in this.
There is at least one CF/Neo4J bridge out there, but it's pretty incomplete. I have one that I worked on, but I need to figure out if I can open source it!
Just a small addition to otupman's comments. I can confirm his theory about connecting to the admin interface from CF. Adding the following jars to the CF class path seemed to be enough to get the basics up and running; you may need additional jars if you are using more advanced features. Note that I am using Tomcat, so the exact jars may differ slightly for your environment:
neo4j-community-1.7/lib/*.* (entire directory)
neo4j-community-1.7/system/lib: (ONLY the jars below)
asm-3.1.jar
asm-analysis-3.2.jar
asm-commons-3.2.jar
asm-tree-3.2.jar
asm-util-3.2.jar
commons-configuration-1.6.jar
jackson-core-asl-1.8.3.jar
jackson-jaxrs-1.8.3.jar
jackson-mapper-asl-1.8.3.jar
jersey-core-1.9.jar
jersey-multipart-1.9.jar
jersey-server-1.9.jar
jetty-6.1.25.jar
jetty-util-6.1.25.jar
neo4j-server-1.7-static-web.jar
neo4j-server-1.7.jar
rrd4j-2.0.7.jar
Then I started the server and database in onApplicationStart:
// Create the embedded database through the 1.7 factory API
factory = createObject("java", "org.neo4j.graphdb.factory.GraphDatabaseFactory");
dbroot = ExpandPath("/neo4jtest/");
graphDb = factory.newEmbeddedDatabase(dbroot & 'myFirstGraphDB');
// Wrap it with the admin server so webadmin sees the same data
Bootstrapper = createObject("java", "org.neo4j.server.WrappingNeoServerBootstrapper");
graphServer = Bootstrapper.init( graphDb );
graphServer.start();
// Keep references around for shutdown later
application.graphServer = graphServer;
application.graphDb = graphDb;
And closed both in onApplicationEnd
application.graphDb.shutDown();
application.graphServer.stop();
Edit: After some further testing, I think it is better to load them once in onServerStart and then use a shutdown hook to close them. But since this is just for a local development box, it is less critical.

Backup strategy for django

I recently deployed a couple of web applications built using Django (on WebFaction).
These would be some of the first projects of this scale that I have worked on, so I wanted to know what an effective backup strategy is for maintaining backups both on WebFaction and at an alternate location.
EDIT:
What i want to backup?
Database and user-uploaded media (my code is managed via git).
I'm not sure there is a one-size-fits-all answer, especially since you haven't said what you intend to back up. My usual MO:
Source code: use source control such as svn or git. This means that you will usually have dev, deploy and repository copies of the code, which act as backups (especially with a DVCS).
Database: this also depends on usage, but usually:
Have a dump_database.py management command that introspects settings and, for each DB, outputs the correct dump command (taking the DB type and the database name into consideration).
Have a cron job on another server that connects to the application server over ssh, executes the dump-db management command, tars the sql file with the DB name + timestamp as the file name, and uploads it to another server (Amazon's S3 in my case).
Media files, e.g. user uploads: keep a cron job on another server that can ssh into the application server and call rsync to mirror them (both jobs are sketched below this list).
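A hedged sketch of what those two jobs might look like in the backup host's crontab (host names, paths and the dump_database command are the ones described above, i.e. your own):
# Nightly database dump pulled over ssh (dump_database writes the tarred dump server-side)
0 2 * * * ssh app@appserver 'cd /home/app/project && python manage.py dump_database' && scp app@appserver:/home/app/backups/latest.tar.gz /backups/db/db-$(date +\%Y\%m\%d).tar.gz
# Nightly sync of user-uploaded media
30 2 * * * rsync -az app@appserver:/home/app/project/media/ /backups/media/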
The thing to keep in mind, though, is the intended purpose of the backup.
If it's accidental data loss (be it disk failure, a bug or SQL injection) or simply restoring, you can keep those cron jobs on the same server.
If you also want to be safe in case the server is compromised, you cannot keep the remote backup credentials (ssh keys, Amazon secret, etc.) on the application server! Otherwise an attacker would gain access to the backup server as well.