I have no experience with NAV.
I have to move Purchase Order data from another system to NAV 2009. I have worked on the ETL part, and I have all the data to be moved ready. How do I import it into NAV?
This depends, of course, on where your data is stored and whether your license allows you to write code / create new objects in NAV. But in any case, there are at least two tables that must be filled: "Purchase Header" and "Purchase Line". Some other tables (like Document Dimension) might also be required, depending on the data you need to transfer. Inserting records directly into the corresponding SQL Server tables is not recommended, since there is a lot of C/AL code in the NAV order tables that validates the data, and those C/AL triggers must be executed.
This still leaves several options.
Write C/AL code that reads the data from the external source and inserts records into the purchase header and purchase line tables. This assumes that you have the appropriate license permissions and some experience with the NAV development environment.
Create an XMLport object if the data can be stored in XML format, or a Dataport for CSV-like files - that object type is still available in 2009. Both can be restricted by the license too. See: About NAV XMLPort objects and NAV Dataport objects.
Publish the purchase order pages as web services and insert the records from a consuming application. See: Registering and consuming a web service in NAV 2009.
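For the last option, here is a minimal C# sketch of a consuming application. It assumes the Purchase Order page has been published as a web service named PurchaseOrder and a web reference has been added to the project; the proxy namespace, server, company and vendor number are illustrative, not prescriptive:

    using System;
    using ImportApp.NavPurchaseOrder;   // hypothetical generated proxy namespace

    class ImportPurchaseOrders
    {
        static void Main()
        {
            var service = new PurchaseOrder_Service();
            service.UseDefaultCredentials = true;
            // NAV 2009 page service URL pattern; server, port and company are assumptions.
            service.Url = "http://navserver:7047/DynamicsNAV/WS/CRONUS/Page/PurchaseOrder";

            var po = new PurchaseOrder();
            po.Buy_from_Vendor_No = "10000";   // runs the same C/AL validation as the UI
            service.Create(ref po);            // NAV assigns the order No. from the number series
            Console.WriteLine("Created purchase order " + po.No);
        }
    }

Purchase lines would typically be added in a similar way through the page's lines part before calling Update.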
Is there anything (a system procedure, function or other mechanism) in SQL Server that provides the functionality of Oracle's DBMS_ALERT package (and DBMS_PIPE, respectively)?
I work in a plant, and I'm using an extension product of SQL Server called InSQL Server, by Wonderware, which is specialized in gathering data from plant controllers and human-machine interface (SCADA) software.
This system can record events happening in the plant (like a high-temperature alarm, for example). It stores sensor values in extension tables of SQL Server, and other less dense information in normal SQL Server tables.
I want to be able to alert some applications running on operator PCs that an event has been recorded in the database.
An AFTER INSERT trigger on the events table seems to be a good place to put something equivalent to DBMS_ALERT (if it exists), to wake up other applications that are waiting for the specific alert and have the operators type in some data.
In other words - I want to be able to notify other processes (that have a connection to SQL Server) that something has happened in the database.
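For reference, SQL Server's closest built-in analogue to DBMS_ALERT is Query Notifications (built on Service Broker), which a .NET client can consume via SqlDependency. A minimal sketch of a listener for the operator PCs follows; the table, columns and connection string are hypothetical, and Service Broker must be enabled in the database:

    using System;
    using System.Data.SqlClient;

    class EventAlertListener
    {
        // Hypothetical connection string and table name.
        const string ConnStr = "Server=PLANTSQL;Database=Runtime;Integrated Security=true";

        static void Main()
        {
            SqlDependency.Start(ConnStr);   // requires ALTER DATABASE ... SET ENABLE_BROKER
            Subscribe();
            Console.ReadLine();             // keep the operator app alive
        }

        static void Subscribe()
        {
            using (var conn = new SqlConnection(ConnStr))
            // Query Notifications require an explicit column list and two-part table names.
            using (var cmd = new SqlCommand("SELECT EventId, EventType FROM dbo.PlantEvents", conn))
            {
                var dependency = new SqlDependency(cmd);
                dependency.OnChange += (sender, e) =>
                {
                    Console.WriteLine("Events table changed: " + e.Info);
                    Subscribe();            // a notification fires only once; re-subscribe
                };
                conn.Open();
                cmd.ExecuteReader().Dispose();   // executing the command registers the subscription
            }
        }
    }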
All Wonderware Historian (formerly InSQL, now AVEVA) data is stored in the history blocks EXCEPT for the actual tag storage configuration and dedicated event data. The time-series data for analogs, discretes and strings is NOT in SQL tables at all - unless someone has done custom configuration to create tables of their own.
Where do you want these notifications to come up? Even though the historical data is NOT stored in SQL tables, Wonderware has extensive documentation on how to use SQL queries to retrieve the data appropriately (and check for whatever condition you are looking for).
You can easily build a stored procedure and configure it for a maintenance plan.
But are you just trying to alarm (provide notification) on the SCADA itself?
Or are you truly utilizing historical data (looking for a data trend - average, etc.)?
Or trying to send the notification to non-SCADA interfaces?
Depending on your specific answer, the SCADA itself should probably be able to do it.
But there is software that already does this type of thing: Win-911, SeQent and Scadatec are a few in the OT space, and there are also things like HipLink or even DeskAlerts, which can connect to any SQL database via its own API.
So where does the info need to go (email, text, phone, desktop app...), and what is the real source of the data?
I am looking for an answer regarding the report data storage concept in Power BI.
I have published 3 reports to the Power BI service (cloud):
Report1 with an Excel source
Report2 with an on-premise SQL Server source
Report3 with an Azure SQL source
Around 200 users in my organization will be accessing these reports. I want to understand the following:
The first time a particular report is accessed, will the data be fetched from the source and shown in the report, or will it be stored in some cloud location from which the data will go to the report?
Suppose a user opens a report that was already viewed by another user; will the data be fetched from the source again, or is there any concept of a cross-user shared cache?
Suppose a user opens the report for the second time (for example, after having already accessed it, the user refreshes the web page); will the data be fetched again, or is there any concept of a shared cache?
Does the answer to any of the above change if I had used Power BI Report Server (on-premise) and deployed the report on PBRS?
With the service, you typically upload a PBIX, which contains the report pages and all of the underlying data. Unless you set up a data gateway to accommodate DirectQuery and/or scheduled refreshes, the cloud service does not access your original data sources at all. With a scheduled refresh, it only accesses the original data during the refresh. A DirectQuery connection does access a server "live" but has many limitations.
The data is fetched when you load it into your Power BI Desktop application and then uploaded to the cloud when you publish the report to a workspace. Once it's there, the data shown to the user is fetched from the cloud copy, not the original data source.
Same answer as above regarding where the data is fetched from (the cloud copy). I don't believe there is a shared cache between users; rather, each user gets some temporary caching individually. This type of caching saves the calculation results (computed on the underlying data) that are needed to populate the report visuals.
There is some temporary caching, so if a user switches among slicer combinations and returns to one previously chosen, you may see much quicker loading than when selecting a new configuration, since the results were cached and don't need to be recomputed. As far as I understand, this kind of caching is short-lived and not shared among users. Remember, this type of cache is not the same as the underlying data in the cloud copy of the PBIX.
I've not used an on-premise server, but I would expect the behavior to be similar, with the exception that the service runs on the local server instead of a cloud server somewhere else.
The upshot is that traffic in the service is separated from the requests to the original source data (assuming no DirectQuery connections). Those original sources are only accessed during data refreshes, which are independent of end-user actions (under the same assumption).
I have developed a Power BI report using Power BI Desktop, pointing to a private on-premise development database as the data source so that I was able to develop and test it easily. Then I published it from my Power BI Desktop PBIX to my customer's workspace.
As a result, the workspace contains the published report and the dataset. Later, my customer changed the dataset so that it now points to their own on-premise production database. It works perfectly.
Now I want to publish a new report for my customer using the previously published and reconfigured dataset. The problem is that I can't see any option in Power BI Desktop to have the report point to the published dataset, nor can I see any option to avoid creating a new dataset each time I publish a report, nor any way to reconfigure the newly published report from the web portal to point to the same dataset as the first one.
Is there any way to do this, or any workaround for this scenario? I think the most reasonable solution would be to be able to change the dataset of any report, so that the datasets of reports would be interchangeable.
Update:
I had already used connection-specific parameters, but I'm not given the rights to change the published dataset, so that's a dead end.
Another thing I have come up against is that in Power BI Desktop you cannot change the connection parameter values to those of the production environment and publish the report if you can't reach the target database from your computer. Power BI Desktop asks you to apply the changes first, and when it applies the new values it tries to connect to the corresponding database; obviously, this ends with a network-related or timeout error, cancelling the changes and returning you to the starting point.
It's always good practice to use connection-specific parameters to define the data source. This means that you do not enter the server name directly, but specify it indirectly using a parameter; the same goes for the database name, if applicable.
If you are about to make a new report, cancel the Get data dialog, define the parameters as described below, and then in Get data specify the data source using these parameters.
To modify an existing report, open Power Query Editor by clicking Edit Queries, and in Manage Parameters define two new text parameters; let's name them ServerName and DatabaseName.
Set their current values to point to one of your data sources, e.g. SQLSERVER2016 and AdventureWorks2016. Then right-click your query in the report and open Advanced Editor. Find the server name and database name in the M code; for a typical SQL Server source, the relevant line looks something like this:
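    Source = Sql.Database("SQLSERVER2016", "AdventureWorks2016")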
and replace them with the parameters defined above, so the M code will look like this:
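    Source = Sql.Database(ServerName, DatabaseName)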
Now you can close the editor and apply your changes, and your report should work as before. But when you want to change the data source, do it using Edit Parameters, changing the server and/or database name to point to the other data source that you want to use for your report.
After changing the parameter values, Power BI Desktop will ask you to apply the changes and reload the data from the new data source. To change the parameter values (i.e. the data source) of a report published in the Power BI Service, go to the dataset's settings and enter the new server and/or database name.
If the server is on-premise, check the Gateway connection too, to make sure that it is configured to use the right gateway. You may also want to check the available gateways under Manage gateways.
After changing the data source, refresh your dataset to get the data from the new source. With a Power BI Pro account you can do this 8 times per 24 hours, while if the dataset is in a dedicated capacity, the limit is raised to 48 times per 24 hours.
This is an easy way to make your reports "switchable", e.g. for switching a report from a DEV or QA to a PROD environment, or, as part of your disaster recovery plan, for automating the switch of all reports in a workspace to another DR server. In your case, it will allow you (or your customers) to easily switch the data source of the report.
I think the only correct answer is that it cannot be done, at least at this moment.
The closest way of achieving this is with Live connections:
https://learn.microsoft.com/en-us/power-bi/desktop-report-lifecycle-datasets
But if you have already designed your report against your own development environment and its connection parameters, rather than a Live connection, then you are out of luck: your only options are to redo the whole report using a Live connection, or, as an odd workaround, to use an alias in your configuration that matches the name of the production database server, together with the same database name as in the target production environment.
In my job, I have to do a lot of backups and restores of NAV companies in order to create new companies similar to a previous one. I am planning to build a .NET application to do the job, basically automating the repetitive stuff, but the problem is that the Navision version we use is 2009 R2, and I can't find a way to back up and restore a NAV database/company in 2009 R2 using .NET/SQL. Is there any way to do this?
As said, there is no way to automate this with a script. When performing a backup/restore, NAV does many things besides just creating another set of tables: it creates keys and views and appends records to system tables such as Company (where the list of companies is stored).
From your question I can't understand why you need to back up a company to create a similar one, because after that you would have to clear all the ledgers etc. Why copy data just to wipe it afterwards?
An alternative approach you could use to solve the problem of creating a new company fast is to create a codeunit in NAV that populates the newly created company with all the data you need. Take a look at codeunit 2, Company-Initialize. When run, it creates empty records in all the important setup tables and fills the report selections. You can modify it, or create a similar one that fills the setup tables with your default values or copies them from another company you provide as a parameter (use CHANGECOMPANY for that).
Here is one more thing that I've found:
In earlier versions of Microsoft Dynamics NAV, you could create a company by using the INSERT Function (Record) to add a record to table 2000000006, the Company table. In Microsoft Dynamics NAV 2013, it is not supported to create a company by using the INSERT function. You must create companies by using the New Company window in the development environment.
That means that in your version you can even create a new company automatically from the codeunit I mentioned.
Also, since NAV 2013 R2 there are new capabilities: you can use command-line parameters of finsql.exe to create a company, and then invoke a NAV codeunit from a PowerShell script to populate it with data.
There is no way to back up a NAV company using SQL.
You can only back up the whole database.
If you want to back up a separate company, you need to use the built-in backup to FBK files (Tools -> Backup).
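If a whole-database backup is acceptable, that part at least is easy to automate from the .NET application mentioned in the question. A minimal sketch, where the server, database and backup path are hypothetical:

    using System.Data.SqlClient;

    class NavDbBackup
    {
        static void Main()
        {
            // Backs up the whole NAV database - NOT a single company.
            const string connStr = "Server=NAVSQL;Database=master;Integrated Security=true";
            const string sql = @"BACKUP DATABASE [NAV2009R2] TO DISK = N'D:\Backups\NAV2009R2.bak' WITH INIT";
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                cmd.CommandTimeout = 0;   // backups can run for a long time
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }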
From NAV 2015 you can back up/restore companies from the RoleTailored/Windows client.
Cheers!
I am working on an SSIS (2012) package that collects data from our till system into a staging area, and from the staging area into CRM 2011 (on-premise, Rollup 11).
In CRM we have the contact entity and the order entity. These two entities are related via a GUID: contactid (the PK in contact) and customerid (an FK in order).
When I insert a new order into CRM, how do I ensure that the GUID is created to associate that order with either a new contact or an already existing contact?
I'm assuming that since you're using SSIS, you're doing straight SQL inserts? If so, this is not supported. Ideally you'd be using the SDK, and in that case you can set the GUID manually before actually creating the record, although the contact id still has to exist when creating the order.
So you'll want to grab all of your existing contacts up front, then determine for each order whether the contact exists or not. If it does, just set the customerid when you create the order and you're all set. If it doesn't, you'll need to create the contact (potentially assigning it an id yourself) and then set the customerid when you create the order.
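A minimal sketch of that flow with the CRM 2011 SDK; the attribute values are hypothetical, and note that salesorder normally also requires a price list lookup:

    using System;
    using Microsoft.Xrm.Sdk;   // CRM 2011 SDK

    class OrderLoader
    {
        // 'service' is an IOrganizationService already connected to CRM.
        static Guid CreateOrder(IOrganizationService service,
                                Guid? matchedContactId, Guid priceListId)
        {
            Guid contactId;
            if (matchedContactId.HasValue)
            {
                contactId = matchedContactId.Value;   // found among contacts fetched up front
            }
            else
            {
                // No match: create the contact, assigning its GUID ourselves.
                var contact = new Entity("contact") { Id = Guid.NewGuid() };
                contact["lastname"] = "Smith";        // hypothetical value
                service.Create(contact);
                contactId = contact.Id;
            }

            var order = new Entity("salesorder");
            order["name"] = "Till import";            // hypothetical value
            // customerid is the lookup that ties the order to the contact.
            order["customerid"] = new EntityReference("contact", contactId);
            order["pricelevelid"] = new EntityReference("pricelevel", priceListId);
            return service.Create(order);
        }
    }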
I would echo what Daryl has said, in that SQL inserts are not supported and generally a bad idea. However, there is a solution: a company called KingswaySoft makes an SSIS component that allows you to read from and write to CRM using the web services. The best part is that it is free if you don't want to run it using SQL Agent, and even if you do want to schedule it, the cost is very small for such an excellent product.
You can download it from here
http://www.kingswaysoft.com/products/ssis-integration-toolkit-for-microsoft-dynamics-crm