I am working on an SSIS (2012) package that collects data from our till system into a staging area and from the staging area into CRM 2011 (on-premise, Rollup 11).
In CRM we have a contact entity and an order entity. These two entities are related via a GUID: contactid (PK in contact) and customerid (FK in order).
When I insert a new order into CRM, how do I ensure that the GUID is created so that the order is associated with either a new contact or an already existing contact?
I'm assuming that since you're using SSIS you're doing straight SQL inserts? If so, this is not supported. Ideally you'd be using the SDK, and in that case you can set the GUID manually before actually creating the record, although the contact still has to exist when you create the order.
So you'll want to grab all of your existing contacts up front, then determine, for each order, whether the contact exists or not. If it does, just set the customerId when you create the order and you're all set. If it doesn't, you'll need to create the contact (potentially assigning it an Id yourself), and then set the customerId when you create the order.
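If you go the SDK route, a minimal sketch of that logic might look like the following (CRM 2011 IOrganizationService; the helper name is illustrative, the entity and attribute names are the standard contact/salesorder ones, and any lookup of existing contacts against your staging data is assumed to have happened already):

    using System;
    using Microsoft.Xrm.Sdk;

    public static class OrderImport
    {
        // Illustrative helper: create the order, creating the contact first if needed.
        public static Guid CreateOrderForContact(IOrganizationService service,
                                                 Guid? existingContactId,
                                                 string orderName)
        {
            Guid contactId;
            if (existingContactId.HasValue)
            {
                // The contact already exists in CRM; just reuse its id.
                contactId = existingContactId.Value;
            }
            else
            {
                // Assign the GUID up front so the staging area can keep a copy of it.
                var contact = new Entity("contact") { Id = Guid.NewGuid() };
                contact["lastname"] = "Till customer";   // whatever attributes you map from staging
                service.Create(contact);                 // the contact must exist before the order references it
                contactId = contact.Id;
            }

            var order = new Entity("salesorder");
            order["name"] = orderName;
            // customerid is the lookup that ties the order to the contact.
            order["customerid"] = new EntityReference("contact", contactId);
            // Depending on your configuration, other attributes (e.g. a price list)
            // may also be required before the order can be created.
            return service.Create(order);
        }
    }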
I would echo what Daryl has said in that SQL inserts are not supported and generally a bad idea. However, there is a solution: a company called KingswaySoft makes an SSIS component that lets you read from and write to CRM through its web services. The best part is that it is free if you don't need to run it under SQL Server Agent, and even if you do want to schedule it, the cost is very small for such an excellent product.
You can download it from here
http://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-crm
Related
In my job I have to do a lot of backing up and restoring of NAV companies in order to create new companies similar to a previous one. I am planning to build a .NET application to do the job, basically to automate the repetitive stuff, but the problem is that the Navision version we use is 2009 R2 and I can't find a way to back up and restore a NAV database/company in 2009 R2 using .NET/SQL. Is there any way to do this?
As said, there is no way to automate it with a script. When performing a backup/restore, NAV does many things besides just creating another set of tables: it creates keys and views and appends records to system tables such as Company (where the list of companies is stored).
From your question I can't understand why you need to back up a company in order to create a similar one, because afterwards you would have to clear all the ledgers etc. Why copy the data just to wipe it?
An alternative approach to creating a new company quickly is to write a codeunit in NAV that populates the newly created company with all the data you need. Take a look at codeunit 2 Company-Initialize: when run, it creates empty records in all the important setup tables and fills the report selections. You can modify it, or create a similar codeunit that fills the setup tables with your default values or copies them from another company passed as a parameter (use CHANGECOMPANY for that).
Here is one more thing that I've found:
In earlier versions of Microsoft Dynamics NAV, you could create a company by using the INSERT Function (Record) to add a record to table 2000000006, the Company table. In Microsoft Dynamics NAV 2013, it is not supported to create a company by using the INSERT function. You must create companies by using the New Company window in the development environment.
That means that in your version you can even create a new company automatically from the codeunit I mentioned.
Also, since NAV 2013 R2 there are new capabilities: you can use the command-line parameters of finsql.exe to create a company, and then invoke a NAV codeunit from a PowerShell script to populate it with data.
There is no way to back up a NAV company using SQL.
You can only back up the whole database.
If you want to back up a separate company, you need to use the built-in backup that produces .fbk files (Tools -> Backup).
From NAV 2015 onwards you can back up/restore companies from the RoleTailored/Windows client.
Cheers!
I have no experience with NAV.
I have to move purchase order data from another system into NAV 2009. I have done the ETL part and have all the data ready to be moved to NAV. How do I import it into NAV?
This depends, of course, on where your data is stored and whether your license allows you to write code / create new objects in NAV. In any case, there are at least two tables that must be filled: "Purchase Header" and "Purchase Line". Some other tables (like Document Dimension) might also be required, depending on the data you need to transfer. Inserting records directly into the corresponding SQL Server tables is not recommended, since there is a lot of C/AL code in the NAV order tables that validates the data, so the C/AL triggers must be executed.
This still leaves several options.
Write C/AL code that reads data from the external source and inserts records into Purchase Header and Purchase Line. This assumes that you have the appropriate license permissions and some experience with the NAV development environment.
Create an XMLport object if the data can be stored in XML format, or a Dataport for CSV-like files (this object type is still available in 2009). These can be restricted by the license too. See: About NAV XMLport objects and NAV Dataport objects.
Publish the purchase order pages as web services and insert records from a consuming application (a sketch follows below). See: Registering and consuming a web service in NAV 2009.
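For the web-service option, a rough sketch of the consuming side might look like this. It assumes the Purchase Order page has been published as a page web service and consumed through a classic web reference (ASMX-style proxy); the namespace, URL and field names below are placeholders for whatever your own reference actually generates:

    using System;
    using MyApp.NavPurchaseOrderService;   // web reference to .../Page/PurchaseOrder (name is illustrative)

    class PurchaseOrderImport
    {
        static void Main()
        {
            var service = new PurchaseOrder_Service
            {
                UseDefaultCredentials = true,
                // Illustrative endpoint; substitute your server, instance and company.
                Url = "http://navserver:7047/DynamicsNAV/WS/CRONUS/Page/PurchaseOrder"
            };

            var order = new PurchaseOrder
            {
                // Setting fields through the page service runs the normal C/AL validation triggers.
                Buy_from_Vendor_No = "10000"
            };
            service.Create(ref order);   // NAV assigns the order No. from the number series

            Console.WriteLine("Created purchase order " + order.No);
        }
    }

Lines would be added the same way, either through the lines subpage exposed on the proxy or through a separately published page for purchase lines.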
I'm currently trying to implement Microsoft Sync Framework for field agents that will be working mostly disconnected from the server.
Currently I have a SQL Express database that the application points to in offline mode, and when the agents are back online they can hit a sync button to push the changes up and down.
I have no problem creating the filtered scope, but our schema uses a "VersionID" column to handle historical data.
No data is ever deleted from the databases, so when a row is "updated" a new row is inserted with max(VersionID) + 1 as its new VersionID.
Since I can't use aggregate functions in a filtered scope, I can't figure out how to retrieve only the row with the highest VersionID for each logical record.
I need to retrieve only the max(VersionID) records because of the 10 GB limit on the database; I can't possibly download all records without going over the limit, given all the support tables the application requires.
Any ideas?
The scope filter is simply appended to the WHERE clause of the generated _selectchanges stored procedure. If you can express your condition in a simple query, you should be able to set that same condition as the scope filter.
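So one thing you could try (untested; "Orders", "OrderNo" and "VersionID" below are placeholders for your own schema) is to express the "latest version only" condition as a correlated subquery in the filter clause, then check the generated _selectchanges procedure to confirm it ends up where you expect:

    using System.Data.SqlClient;
    using Microsoft.Synchronization.Data;
    using Microsoft.Synchronization.Data.SqlServer;

    class ProvisionFilteredScope
    {
        static void Main()
        {
            using (var conn = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=FieldDb;Integrated Security=True"))
            {
                var scopeDesc = new DbSyncScopeDescription("OrdersLatestVersionScope");
                scopeDesc.Tables.Add(SqlSyncDescriptionBuilder.GetDescriptionForTable("Orders", conn));

                var provisioning = new SqlSyncScopeProvisioning(conn, scopeDesc);

                // Copy the columns the filter needs into the tracking table so [side] can see them.
                provisioning.Tables["Orders"].AddFilterColumn("OrderNo");
                provisioning.Tables["Orders"].AddFilterColumn("VersionID");

                // This clause is appended verbatim to the _selectchanges WHERE clause, so a
                // correlated subquery against the base table should be usable here even though
                // an aggregate cannot be used directly as a simple column filter.
                provisioning.Tables["Orders"].FilterClause =
                    "[side].[VersionID] = (SELECT MAX(o.[VersionID]) FROM [dbo].[Orders] o " +
                    "WHERE o.[OrderNo] = [side].[OrderNo])";

                provisioning.Apply();
            }
        }
    }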
I am using Microsoft Sync Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any schema changes in the SQL Server database, we have to deprovision the database tables, and after making the schema changes we reprovision them.
In this process the tracking tables (i.e. the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
For example, the application is at version 2.0 and syncing is working fine. In the next version, 3.0, I want to make some DB changes. In the deprovisioning/provisioning process the tracking info gets deleted, so all the tracking information from the previous version is lost. I do not want to lose it. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking tables for you; you don't have to copy and reload them yourself.
Now, if you think the tracking table is where the framework stores what was previously synched, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it is used for change enumeration. The information about what was previously synched is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you sync again, it is as if the two replicas had never synched before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Framework-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovisioning/provisioning process.
My workaround:
Before deprovisioning, I will run a script to copy all the "User_tracking" data into a temporary table "User_tracking_1", so all the existing tracking info will be stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" back into "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored.
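Roughly, the script I have in mind would be something like this (wrapped in C# here; the table names match the example above, and the column layout is assumed to be unchanged, since the real tracking-table columns depend on the scope definition):

    using System.Data.SqlClient;

    class TrackingTableBackup
    {
        const string ConnString = @"Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

        static void Run(string sql)
        {
            using (var conn = new SqlConnection(ConnString))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

        static void Main()
        {
            // 1. Before deprovisioning: keep a copy of the existing tracking rows.
            Run("SELECT * INTO [dbo].[User_tracking_1] FROM [dbo].[User_tracking]");

            // ... deprovision, apply the schema changes, reprovision ...

            // 2. After reprovisioning: put the old tracking rows back, then clear the copy.
            //    The column layout must still match the newly created tracking table.
            Run("DELETE FROM [dbo].[User_tracking]");   // remove whatever reprovisioning populated
            Run("INSERT INTO [dbo].[User_tracking] SELECT * FROM [dbo].[User_tracking_1]");
            Run("DELETE FROM [dbo].[User_tracking_1]");
        }
    }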
Is this the right approach...
I have to migrate some legacy data from a stand-alone SQL Server database to a SharePoint list.
I'm going to use a programmatic approach and write code that communicates with the SharePoint Lists ASMX web service.
Are there any "data transformation wizards" to simplify this task, or a better approach to port legacy data from a SQL Server database to a SharePoint list?
Thank you in advance!
Since this is a one-time operation, I would not worry about best practice but would consider the fastest way to do it.
You can use Excel 2010 (I have not tested it with Excel 2007) to export data to SharePoint 2010. Here are the high-level steps:
Import the data from SQL Server using the Data tab in the ribbon.
Excel will automatically create a table.
Now you can prepare the data for export to SharePoint: remove unwanted columns, add new columns, remove unwanted rows, rearrange columns, etc.
While in the table, use the "Export Table to SharePoint List" functionality to publish your data to SharePoint. More information is available at: http://office.microsoft.com/en-gb/excel-help/export-an-excel-table-to-a-sharepoint-list-HA010131472.aspx
It is quick! But let's be aware of the limitations:
1. It cannot publish data to a list which already exists
2. It will not create a content type for the exported list. The columns are directly attached to the list.
If you want greater control over the migration, programming may be the way to go unless someone has a better idea in this great forum!
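If you do go the programmatic route against the Lists ASMX service that the question mentions, the core call is UpdateListItems with a CAML batch. A minimal sketch, assuming a classic web reference to /_vti_bin/Lists.asmx; the list name and field names are purely illustrative:

    using System;
    using System.Xml;
    using MyApp.SharePointListsService;   // web reference to http://yourserver/_vti_bin/Lists.asmx (name is illustrative)

    class LegacyDataMigration
    {
        static void Main()
        {
            var lists = new Lists
            {
                Url = "http://yourserver/_vti_bin/Lists.asmx",   // illustrative URL
                UseDefaultCredentials = true
            };

            // Build a CAML batch with one "New" method per legacy row.
            // "Title" and "LegacyId" are placeholder field names.
            var doc = new XmlDocument();
            XmlElement batch = doc.CreateElement("Batch");
            batch.SetAttribute("OnError", "Continue");
            batch.InnerXml =
                "<Method ID='1' Cmd='New'>" +
                "<Field Name='Title'>Sample row from SQL Server</Field>" +
                "<Field Name='LegacyId'>42</Field>" +
                "</Method>";

            // "Legacy Orders" is the target list; the call returns per-row results you should inspect.
            XmlNode result = lists.UpdateListItems("Legacy Orders", batch);
            Console.WriteLine(result.OuterXml);
        }
    }

For any real volume you would loop over the rows coming out of SQL Server and send them in reasonably sized batches rather than one Method element at a time.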