How to sync similar tables with different names between two SQL databases? - microsoft-sync-framework

Unfortunately I need to keep the table names different between the server and the client databases, even though the tables are similar in structure. So, for example, my server has a table called "customers" while the client has the very same table under the name "clients".

You can provide the local table name and the remote table name to the DbSyncAdapter constructor, something like this:
DbSyncAdapter adapter = new DbSyncAdapter("localTableName", "remoteTableName");
Then attach the adapter to the provider:
DbSyncProvider provider = new DbSyncProvider(connection);
provider.Adapters.Add(adapter);
Please note that which table name counts as local and which as remote will change depending on how you set up the providers and the sync direction in the SyncOrchestrator.
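Putting it together, here is a minimal sketch; the table-name pair, key column, and scope name are hypothetical, and each adapter still needs its change-enumeration and apply commands wired up:

// Minimal sketch: the local table "clients" maps to the peer's "customers".
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;

class AdapterSetup
{
    static DbSyncProvider BuildProvider(string connectionString)
    {
        SqlConnection connection = new SqlConnection(connectionString);

        // The first constructor argument is the local table name,
        // the second is the name the remote peer uses.
        DbSyncAdapter adapter = new DbSyncAdapter("clients", "customers");
        adapter.RowIdColumns.Add("client_id"); // hypothetical primary key column

        // Still required (elided here): adapter.SelectIncrementalChangesCommand,
        // adapter.InsertCommand, adapter.UpdateCommand, adapter.DeleteCommand,
        // all written against the local table name.

        DbSyncProvider provider = new DbSyncProvider(connection);
        provider.ScopeName = "sample_scope"; // hypothetical scope name
        provider.Adapters.Add(adapter);
        return provider;
    }
}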

Related

Having multiple databases in Amazon Neptune

In MySQL we can create multiple databases and then create different tables in each of those databases, e.g.
mysql> create database demo;
mysql> use demo;
mysql> create table test_demo (id int);
This allows us to create multiple tables under different databases, which provides virtual segregation.
I am looking for similar functionality in Amazon Neptune. Is it possible to create different databases in Amazon Neptune and then build graphs in those databases, independent of each other? If it is possible, then how?
Note: I don't want to create a separate cluster for each graph, hence the question above.
At present, Neptune is a single-tenant database service. This means that a single Neptune cluster can only host a single logical database.
If you're looking to use a single cluster to host data for multiple contexts/users, you would need to do this within the application and use different aspects of the data model to denote these different contexts. For example, if you have a Person node label in your graph, you could use separate prefixes to denote which Person nodes relate to different users: User1.Person, User2.Person, ..., UserX.Person. Similar for edges and property keys.
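To make the prefixing idea concrete, here is a rough sketch using the Gremlin.Net driver; the endpoint, labels, and property names are all placeholders:

// Sketch: partitioning one Neptune graph by prefixing labels per tenant.
using Gremlin.Net.Driver;
using Gremlin.Net.Driver.Remote;
using static Gremlin.Net.Process.Traversal.AnonymousTraversalSource;

class PrefixedTenancy
{
    static void Demo()
    {
        var client = new GremlinClient(
            new GremlinServer("your-neptune-endpoint", 8182, enableSsl: true));
        var g = Traversal().WithRemote(new DriverRemoteConnection(client));

        // Write a Person node into User1's logical "database".
        g.AddV("User1.Person").Property("name", "alice").Iterate();

        // Read back only User1's people; User2.Person nodes are untouched.
        var names = g.V().HasLabel("User1.Person").Values<string>("name").ToList();
    }
}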

How RedShift Sessions are handled from a Server Connection for TEMP tables

I'm using ColdFusion to connect to a RedShift database, and I'm trying to understand (and test my assumptions about) how connections work in relation to TEMP tables in RedShift.
In CFAdmin, for the datasource, I have unchecked "Maintain connections across client requests". I would assume then that each user of my website has their own connection to the DB? Is that correct?
Per the RedShift docs about temp tables:
TEMP: Keyword that creates a temporary table that is visible only within the current session. The table is automatically dropped at the end of the session in which it is created. The temporary table can have the same name as a permanent table. The temporary table is created in a separate, session-specific schema. (You cannot specify a name for this schema.) This temporary schema becomes the first schema in the search path, so the temporary table will take precedence over the permanent table unless you qualify the table name with the schema name to access the permanent table.
Am I to understand that if #1 is true, and each user has their own connection to the database and thereby their own session, then per #2 any tables that are created exist only in that session, even though the "user" is the same, since every connection from my server uses the same credentials?
3. If my assumptions in #1 and #2 are correct, and I have ColdFusion code that runs a query like so:
drop table if exists tablea;
create temp table tablea (like realtable);
insert into tablea
select r.* from realtable r
inner join othertable o on r.id = o.id; -- join target and columns are illustrative
drop table tablea;
And multiple users are using that same function, they should never run into any conflicts where one table gets dropped while another request is trying to use it, correct?
How do I test that this is the case? Besides throwing it into production and waiting for an error, how can I know? I tried running a few windows side by side in different browsers and didn't notice an issue, but I don't know how to verify that the temp tables truly are different between clients (as they should be). I imagine I could query some metadata, but what metadata about the table would tell me that?
I have a similar situation, but with the Red Brick database software. I handle it by creating unique table names. The general idea is:
Create a table name something like this:
<cfset tablename = TableText & randrange(1, 100000)>
Try to create a table with that name. If you fail, try again with a different name.
If you fail 3 times, stop trying and mail the cfcatch information to someone.
I have all this code in a custom tag.
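For illustration only, here is the same retry idea sketched in C#; the table prefix, retry count, and failure handling are stand-ins for what the custom tag does:

// Sketch: create a uniquely named work table, retrying on name collisions.
using System;
using System.Data.SqlClient;

class UniqueTableHelper
{
    static string CreateUniqueTable(SqlConnection conn, string prefix)
    {
        Random rng = new Random();
        for (int attempt = 0; attempt < 3; attempt++)
        {
            string name = prefix + rng.Next(1, 100000); // like randrange(1, 100000)
            try
            {
                using (var cmd = new SqlCommand("CREATE TABLE " + name + " (f1 int)", conn))
                {
                    cmd.ExecuteNonQuery();
                }
                return name; // success; the caller drops the table when finished
            }
            catch (SqlException)
            {
                // Name already exists (or create failed): try another suffix.
            }
        }
        throw new InvalidOperationException(
            "Could not create a unique table after 3 attempts; alert someone.");
    }
}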
Edit starts here
Based on the comments, here is some more information about my situation. In CFAdmin, for the datasource being discussed, the Maintain Connections box is checked.
I put this code on a ColdFusion page:
<cfquery datasource="dw">
create temporary table dan (f1 int)
</cfquery>
I ran the page and then refreshed it. The page executed successfully the first time. When refreshed, I got this error.
Error Executing Database Query.
** ERROR ** (7501) Name defined by CREATE TEMPORARY TABLE already exists.
That's why I use unique table names. I don't cache the queries though. Ironically, my most frequent motivation for using temporary tables is that there are situations where they make things run faster than the permanent tables do.

How to modify a scope in Sync Framework?

I am new to using sync framework and need help in fixing an issue.
The system we built is a Windows-based application. Each user has their own database locally, and at the end of the day they sync their database to the remote DB server when they are on the network.
I added two new columns to an existing table. The scope definition seems to be updated in my local database, but when I try to sync with my remote DB server it says it could not find the _bulk-insert stored procedure and errors out.
When I checked my remote DB server, I could see the new columns in the table, but I don't see any of the stored procedures, and the scope_config table does not have the new columns in it.
Does the remote server need to have the stored procedures, or will updating the scope_config table do?
Have you provisioned your remote DB server? If you're not finding the Sync Fx related objects, then it's not provisioned.
Likewise, Sync Fx does not support schema synchronization; there's nothing in the API to let you alter the scope definition either.
Either you drop and re-create the scope and re-sync, or you hack your way into the Sync Fx scope definition metadata.
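Here's a sketch of the drop-and-re-create route, assuming the SQL Server provisioning classes from Sync Fx 2.1; the scope and table names are hypothetical, and you run it against each database that needs the new definition:

// Sketch: deprovision the old scope, then re-provision with the new schema.
using System.Data.SqlClient;
using Microsoft.Synchronization.Data;
using Microsoft.Synchronization.Data.SqlServer;

class ReprovisionScope
{
    static void Run(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1. Drop the existing scope and its sync metadata and procedures.
            var deprovisioning = new SqlSyncScopeDeprovisioning(conn);
            deprovisioning.DeprovisionScope("MyScope"); // hypothetical scope name

            // 2. Re-describe the table, which now includes the two new columns.
            var scopeDesc = new DbSyncScopeDescription("MyScope");
            scopeDesc.Tables.Add(
                SqlSyncDescriptionBuilder.GetDescriptionForTable("MyTable", conn));

            // 3. Re-provision: this recreates the bulk insert/update/delete
            //    stored procedures and the scope_config row.
            var provisioning = new SqlSyncScopeProvisioning(conn, scopeDesc);
            provisioning.Apply();
        }
    }
}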

Cross-service references in DB

I am building a service-oriented system with multiple services and applications.
Currently I am not sure how to handle DB references between resources from multiple services and databases.
For example, I have a users service, where I can define all users and their roles.
Next I have, products service, where I can define my products, their prices and other information.
I also have an invoicing service, which is used to create invoices. This service will use information from the previous two services, linking products and users to invoices. I am not sure what the best approach for this is.
Do I just save the product ID and user ID that the invoicing service got from the other two services, without any referential integrity?
If I do this, then I will have a problem when generating reports, because at generation time I will need to send a lot of requests to the products service to get the names and prices of the products on an invoice. The same goes for users.
Do I create some products table in my invoicing application and store the name and price of each product at the moment of invoice creation?
If I go with this approach, then if the price or name of a product changes, I will have inconsistent data across my applications?
Is there some well-known pattern for this kind of problem? In other words, what is the best solution?
Cross-service references in the DB are a common challenge for data integrity between multiple web services, especially when we are talking about real-time access.
There are two approaches for your case:
1- Database replication across your servers
I suppose you have each application hosted on a separate server, so I will call your servers Users_server, Products_server and Invoices_server.
In your example, your invoice web service needs to grab data from the Users and Products servers; in this case you can create replicas of your Users database and Products database on your Invoices_server.
This way you can run your join queries on one server and get data from multiple databases.
Query example :
SELECT *
FROM UsersDB.User u
JOIN InvoicesDB.Invoice i ON u.Id = i.ClientId
2- Main database replication
As a first step, you replicate all your databases onto one main server, call it Base_server, which basically contains all the databases from all your services.
Then you can build an internal web service for your application that provides the needed data in just one call; this answers your question about generating reports.
In other words, you make one call to the main Base service instead of making 2 or 3 calls to your separate services.
Note: as backend developers we use this organization as a best practice when building a large bundle-based application: we create a base bundle and then create service bundles which rely on the base bundle.
If your services are already live, we may need more details about the technology and database types you are using in order to give you a more accurate solution.
Just because you are using SOA doesn't mean you abandon database integrity. Continue to use referential integrity where your database design requires it.
At the service level, you can have each service be responsible for returning identity information for the entities which it owns. This identity information may or may not be the actual primary key from the database, but it will be used by the clients of the service as though it were the actual primary key.
When a client wants to create an invoice, it will call the User service and receive a User entity, which will contain a User Identifier. It will call the Product service and receive a set of products, each with a product identifier. It will then call the Invoice service to create an invoice, passing the user identifier and the product identifiers. This will likely return an invoice identifier.
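That flow might look roughly like this; every interface, type, and method name below is hypothetical, purely to make the sequence concrete:

// Hypothetical service contracts; the identifiers stand in for real service calls.
using System.Collections.Generic;
using System.Linq;

public class User { public string Id; public string Name; }
public class Product { public string Id; public string Name; public decimal Price; }

public interface IUserService { User GetUser(string userId); }
public interface IProductService { IList<Product> GetProducts(IEnumerable<string> productIds); }
public interface IInvoiceService { string CreateInvoice(string userId, IEnumerable<string> productIds); }

public class InvoiceWorkflow
{
    private readonly IUserService _users;
    private readonly IProductService _products;
    private readonly IInvoiceService _invoices;

    public InvoiceWorkflow(IUserService users, IProductService products, IInvoiceService invoices)
    {
        _users = users; _products = products; _invoices = invoices;
    }

    public string CreateInvoiceFor(string userId, IEnumerable<string> productIds)
    {
        User user = _users.GetUser(userId);                  // carries the user identifier
        IList<Product> products = _products.GetProducts(productIds);
        return _invoices.CreateInvoice(                      // returns an invoice identifier
            user.Id, products.Select(p => p.Id));
    }
}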
You can (and probably should) enforce integrity by making the productId and userId foreign keys in your invoice table; then your DB makes sure the referenced entities exist. Reports should join tables, not query services for each item. This assumes a central DB shared across the system.

How to pass database connection properties dynamically to connect to different databases in Pentaho

I'm using a Kettle transformation to store CSV file data in a database. My client's requirement is to store the same CSV files in different databases (e.g. Oracle and Postgres) dynamically. How can I achieve this? I have tried a Kettle job with the Set Variables method, but it didn't work for me. How do I pass the database connection properties dynamically to the transformation as parameters, to connect to different databases? Please help me out with this issue.
To connect to different databases of the SAME type, you can just set the relevant properties (host, port, database name, schema, username, password and whatever the connection requires).
However, if your database types change, you need to set up a Generic Database connection, where you need to provide the class of the JDBC driver, the full connection URL (including parameters), the username and password.
By changing those variables you can switch your target database.
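For example (the variable names are illustrative), the Generic connection's fields could be filled in like this:

Driver class: org.postgresql.Driver
Connection URL: jdbc:postgresql://${DB_HOST}:${DB_PORT}/${DB_NAME}

...or, to switch the same transformation to Oracle:

Driver class: oracle.jdbc.OracleDriver
Connection URL: jdbc:oracle:thin:@${DB_HOST}:${DB_PORT}:${DB_SID}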
Bear in mind that a variable cannot be set and used in the same transformation. Due to the parallel nature of transformation steps, you need to set the variable values in transformation A, use them in transformation B, and enclose both transformations inside a parent job. The best variable scope is "valid within root job".
Actually, for different databases, using multiple shared.xml files in multiple KETTLE_HOME locations may work just fine. I didn't have time to test this thoroughly, but I do use KETTLE_HOME and shared.xml for one-off runs; the databases are the same though, at least as per connection.type.