I'm currently trying to implement Microsoft Sync Framework for field agents that will be working mostly disconnected from the server.
Currently the application points to a local SQL Server Express database for offline mode, and when agents are back online they can hit a sync button to push the changes up and down.
I have no problem creating the filtered scope, but our schema uses a "VersionID" column to handle historical data.
No data is deleted from the databases, so when a row is "updated" a new row is inserted with max(VersionID) + 1 as its new VersionID.
Since I can't use aggregate functions in a filtered scope, I can't figure out how to retrieve only the max-version record for each unique row.
I only need the max(VersionID) record for each row because of the 10 GB database limit; with all the support tables the application requires, I can't possibly download every version without going over it.
Any ideas?
The scope filter is simply appended to the _selectchanges stored procedure's WHERE clause. If you can express your condition in a simple query, you should be able to set the same as the scope filter.
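For example, something along these lines might work as the filter clause (a sketch only, assuming a hypothetical table Orders keyed by OrderID, and that the generated _selectchanges procedure aliases the base table as [base]):

    -- Sync only the latest version of each logical row
    [base].[VersionID] = (SELECT MAX(o.VersionID)
                          FROM Orders o
                          WHERE o.OrderID = [base].[OrderID])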
Given DynamoDB's pricing, the thought came to mind to use DynamoDB Local on an EC2 instance for the go-live of our startup's SaaS solution. I've been trying to find something like a data sheet for the local database, specifying limits on the number of tables or records, or the general size of the database file. Possibly we could even run a few local DB instances on dedicated EC2 servers, since we know at login which user needs to be connected to which database.
Does anybody have any information on the local DB limits or on this approach? Also, does anybody know of any legal/licensing issues with using DynamoDB Local in that way?
Every item in DynamoDB Local will end up as a row in the SQLite database file. So the limits are based on SQLite's limitations.
The maximum number of rows in a table is 2^64, but the database file size limit (140 terabytes) will likely be reached first.
Note: because of the above, the number of items you can store in DynamoDB Local will be smaller with the preview version that has Streams support. This is because, to support Streams, the update records for items are stored as well. E.g. if you are only doing inserts, each item will effectively be stored twice: once in a table containing the item data and once in a table containing the INSERT UpdateRecord data for that item (more records are generated if the item is updated over time).
Be aware that DynamoDB Local was not designed for the same performance, availability, and durability as the production service.
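If you want to keep an eye on how close the file is getting to those limits, you can inspect the .db file that DynamoDB Local writes (its name and location depend on how you launch it) from the sqlite3 shell:

    -- Approximate file size in bytes = page_count * page_size
    PRAGMA page_count;
    PRAGMA page_size;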
I am working on an SSIS (2012) package that moves data from our till system to a staging area, and from the staging area to CRM 2011 (on-premise, Update Rollup 11).
In CRM we have a contact entity and an order entity. These two entities are related via a GUID: contactid (the PK in contact) and customerid (the FK in order).
When I insert a new order into CRM, how do I ensure the GUID is created so that the order is associated with either a new contact or an already existing contact?
I'm assuming that since you're using SSIS, you're doing straight SQL inserts? If so, this is not supported. Ideally you'd be using the SDK, and in that case you can set the GUID manually before actually creating the record, although the Contact Id still has to exist when creating the Order.
So you'll want to grab all of your existing Contacts up front, then determine for each order whether the contact exists or not. If it does, just set the customerId when you create the order and you're all set. If it doesn't, you'll need to create the Contact (potentially assigning it an Id), and then set the customerId when you create the order.
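If the matching itself happens in your staging database, a rough T-SQL sketch of that step might look like this (table and column names here are hypothetical; the generated GUIDs would then be used when creating the Contact and Order records through the SDK):

    -- 1. Match orders to contacts that already exist in CRM
    --    (stg.Contacts is assumed to be a local copy of CRM contacts)
    UPDATE o
    SET    o.ContactId = c.ContactId
    FROM   stg.Orders o
    JOIN   stg.Contacts c ON c.Email = o.CustomerEmail;

    -- 2. Pre-assign new GUIDs for orders whose contact doesn't exist yet;
    --    create those Contacts with these same IDs before creating the Orders
    UPDATE stg.Orders
    SET    ContactId = NEWID()
    WHERE  ContactId IS NULL;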
I would echo what Daryl has said in that SQL inserts are not supported and generally a bad idea. However, there is a solution: a company called KingswaySoft makes an SSIS component that allows you to read from and write to CRM using the web services. The best part is that it is free if you don't need to run it using SQL Server Agent; even if you do want to schedule it, the cost is very small for such an excellent product.
You can download it from here
http://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-crm
This is regarding Microsoft Sync Framework, where we are syncing iPad data with a SQL Server database.
It is working fine.
But here, I want to limit the records to be synced to only 20 records at a time. Right now, all the records are getting synced.
Is there an out-of-the-box feature in the sync framework that will enable us to do so?
If not, how can I write custom code to achieve this?
You should be able to set batching, but it's specified in terms of size, not number of rows. You can simply estimate your row size and multiply by the number of rows you have in mind.
Look up SetDownloadBatchSize and SetBatchSpoolDirectory in the documentation.
e.g. config.SetDownloadBatchSize(500); // value in KB
I am using Microsoft Sync Service Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the database tables. After making the schema changes, we reprovision them.
In this process, the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
E.g. the application is at version 2.0 and syncing is working fine. In the next version, 3.0, I want to make some DB changes. In the deprovisioning/provisioning process, the tracking info gets deleted, so all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking tables for you; you don't have to copy and reload them yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync again, it's as if the two replicas had never synced before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
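For instance, you can peek at the metadata that deprovisioning destroys (a sketch; the exact columns depend on the Sync Framework version that provisioned the database):

    -- Each row holds one scope's sync knowledge
    SELECT * FROM scope_info;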
You can find information here on how to "hack" the Sync Framework-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table "User" that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovisioning/provisioning process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info will be stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" to "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored.
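Roughly, something like this (a sketch only; the SELECT * copying assumes the tracking table's schema is identical before and after reprovisioning, which may not hold if the schema change altered the tracked columns):

    -- Before deprovisioning: save the tracking data
    SELECT * INTO User_tracking_1 FROM User_tracking;

    -- After reprovisioning: restore it and clean up
    INSERT INTO User_tracking SELECT * FROM User_tracking_1;
    DELETE FROM User_tracking_1;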
Is this the right approach...
I work with a SQL Server database via ODBC from C++. I want to detect modifications in some tables of the database: another application inserts or updates rows, and I have to detect all these modifications. It does not have to be an immediate trigger; it is acceptable to poll the database tables periodically for modifications.
Below is the way I think this can be done; I need your opinions on whether this is the standard/right way of doing it, or whether better approaches exist.
What I've thought of is this: I add triggers in SQL Server which, on any modification, will insert the identifiers of modified/added rows into a special table, which I will check periodically from my application. Suppose there are 3 tables: Customers, Products, Services. I will make three additional tables, Change_Customers, Change_Products, Change_Services, and will insert into them the identifiers of modified rows of the respective tables. Then I will read these Change_* tables from my application periodically and delete processed records.
Now, if you agree that the above solution is right, I have another question: is it better to have separate Change_* tables for each of the tables I wish to monitor, or is it better to have one fat Changes table which will contain the changes from all tables?
Query Notifications is the technology designed to do exactly what you're describing. You can leverage Query Notifications from managed clients via the well-known SqlDependency class, but there are native OLE DB and ODBC ways too. See Working with Query Notifications, the paragraphs about SSPROP_QP_NOTIFICATION_MSGTEXT (OLE DB) and SQL_SOPT_SS_QUERYNOTIFICATION_MSGTEXT (ODBC). See The Mysterious Notification for an explanation of how Query Notifications work.
This is the only polling-free solution that works with any kind of update. Triggers and polling for changes have severe scalability and performance issues. Change Data Capture and Change Tracking really cover a different topic (synchronizing datasets for occasionally connected devices, e.g. the Sync Framework).
Change Data Capture (CDC): http://msdn.microsoft.com/en-us/library/cc645937.aspx
First you will need to enable CDC on the database:

    USE db_name
    GO
    EXEC sys.sp_cdc_enable_db
    GO

Then enable CDC on each table you want to track, e.g.:

    EXEC sys.sp_cdc_enable_table
        @source_schema = N'dbo',
        @source_name   = N'Customers',  -- table to track
        @role_name     = NULL           -- no gating role
    GO
Then you can query the changes.
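For example (a sketch assuming the default capture instance name dbo_Customers for the table enabled above):

    DECLARE @from_lsn binary(10), @to_lsn binary(10);
    -- Oldest and newest LSNs available for this capture instance
    SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Customers');
    SET @to_lsn   = sys.fn_cdc_get_max_lsn();
    -- All inserts/updates/deletes captured in that range
    SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');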
If your version of SQL Server is 2005, you may use Notification Services.
If your SQL Server is 2008+, the preferable way is to use triggers that log changes to log tables, and to periodically poll these tables from the application to see the changes.
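A minimal sketch of that trigger-plus-log-table approach for the Customers table from the question (the Change_Customers column names are assumptions):

    -- Log table holding identifiers of changed rows
    CREATE TABLE Change_Customers (
        ChangeID   int IDENTITY(1,1) PRIMARY KEY,
        CustomerID int      NOT NULL,
        ChangedAt  datetime NOT NULL DEFAULT GETDATE()
    );
    GO

    -- Record the identifiers of inserted/updated rows
    CREATE TRIGGER trg_Customers_Change
    ON Customers
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        INSERT INTO Change_Customers (CustomerID)
        SELECT CustomerID FROM inserted;
    END
    GO

The application then periodically reads Change_Customers and deletes the rows it has processed.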