Rename columns in dataset without breaking existing reports - powerbi

We want to rename some columns in our Power BI datasets for clarity, and also replace some raw numeric columns with measures so we can add logic to them. However, renaming fields breaks the visuals in reports in the Power BI service that were built on top of the dataset.
Fixing all the visuals by hand is absolutely not feasible for us, as we have hundreds of reports with dozens of visuals each, over multiple datasets. Is there any way to solve this, perhaps by editing the deployed reports programmatically, or are we just stuck with the field names and layout we originally chose?
Thanks for any help!

We found a solution to this. While it is not a completely straightforward process, it lets us alter deployed reports programmatically and fix them in JSON form. The command-line toolset pbi-tools can decompile and recompile reports, e.g.
pbi-tools extract reportfile.pbix
pbi-tools compile reportfolder
and produces a set of editable JSON files. In our tests it was even possible to swap out the underlying data model with this approach. It is also useful for putting reports under version control.
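For the renames themselves, the extracted report is just a folder of JSON files, so a small script can patch the field references in bulk before recompiling. Below is a minimal Python sketch of that idea; the folder layout and the assumption that references appear as literal Table.Column strings may vary between pbi-tools versions, so treat it as a starting point rather than a finished tool, and the table/column names are made up.

# rename_refs.py - naive sketch: rewrite field references inside a folder
# produced by `pbi-tools extract`. The rename mapping below is hypothetical.
import pathlib

REPORT_FOLDER = pathlib.Path("reportfolder")
RENAMES = {
    "Sales.qty": "Sales.Quantity Sold",
    "Sales.amt": "Sales.Sales Amount",
}

for json_file in REPORT_FOLDER.rglob("*.json"):
    text = json_file.read_text(encoding="utf-8")
    patched = text
    for old, new in RENAMES.items():
        patched = patched.replace(old, new)
    if patched != text:
        json_file.write_text(patched, encoding="utf-8")
        print(f"patched {json_file}")

After patching, pbi-tools compile reportfolder rebuilds the .pbix for redeployment.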
Credit to AlexisOlson on the PowerBI forum.

Fixing all the visuals by hand is absolutely not feasible for us, as we have hundreds of reports with dozens of visuals each, over multiple datasets.
The only general solution to breaking changes in a shared Dataset is to introduce a new version, and keep both for a period of time.
For this specific change, you could introduce a new Perspective in the model (using Tabular Editor) which new reports could choose.
Another option, if the Dataset is large and you aren't introducing any structural changes, is to have one model reference the other model using DirectQuery, which is currently a preview feature.

Related

Add custom javascript snippet/file to handle dynamic translations - load in Power BI Service

I've read multiple sources about including JavaScript in Power BI and how this is done. What I've found so far is that this can be achieved through the Power BI JavaScript API or through a Custom Visual (I'm rather new to Power BI, so please correct me if I'm wrong).
However, I haven't found any source stating whether it's possible to alter other visuals using JavaScript implemented this way.
The goal is to dynamically translate table headers (as far as I know there is no native way of doing that in Power BI yet, short of duplicating data, which we'd rather avoid).
So in short the question is:
Is it possible to have JavaScript, implemented through either a Custom Visual or the JavaScript API, alter and translate the headers of a table on a given page in the report?
Your custom visual would not be able to affect other visuals in the way you're hoping for.
While it's not dynamic, you can always alias any fields or measures added to a PBI visual by double clicking the item in the field well and typing over the default name.

PowerBI Embedded - Dynamically create visuals

I need some clarification regarding the creation of visualizations. I have a need to create 1 to N visualizations. Is it possible to dynamically create visuals via code (or some other method) based on some sort of input (I need to produce a different number of charts depending on which client I am viewing the chart from), or am I limited to dragging and dropping visualizations onto a report manually at design time?
Ideally I'd like to be able to run some sort of query and then create charts based off this result set. Is this possible?
If you're talking about custom visualizations...
Your best bet is probably to create a custom visualization that has all the options and features to adapt to varying input.
If you're talking about reports...
I don't think generating Power BI reports with code is really (intended to be) easily done. I recommend either creating several different "smart" but hardcoded reports that adapt to the data and choosing between them dynamically, or switching to another technology for this.

How much time will it take to develop MS Dynamics NAV integration using XML files

I got a new assignment at my workplace: integrate a NAV 4.0 system with a third-party system using XML files. The integration will be based on creating XML files and storing them in predefined directories (i.e. it has to be a simple file-based approach rather than a web-service approach).
The integration will cover only items, i.e. the item and its related data will be exported. The system must let the user fully configure which fields and tables (together with items) are exported, e.g. item, item unit of measure, item sales prices, etc. (later more tables and fields might be added, so the user should be able to set this up without a developer's help). In other words, the system should be more or less field- and table-agnostic, although all tables will be item-related.
The export should be executed by the NAS, and there should also be a manual redo function in case the NAS fails.
After handling the exported XML data, the other systems will produce error files in other predefined directories. The system should import those error XMLs back and show them to the user.
I am really struggling to give a reasonable estimate for this assignment. Could somebody give me a good wild guess as to how much time this would take a reasonable developer?
I'd say you need to implement something very similar to the Change Log, but with a different setup. When the global OnModify trigger fires, you put a record into an Integration Log table. In that table you also have a field like Exported, which is set to true once the NAS has exported the record. This allows a manual redo and a check that everything is working. You will also be able to link an error from the response XML file to a specific record in the integration table if you include the primary key in the outgoing XML file.
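The actual implementation would be C/AL running on the NAS, but as a language-neutral illustration of that last linking step, here is a rough Python sketch that matches errors from a response XML back to log entries by the exported primary key; the element names, file path, and log structure are invented for illustration.

# match_errors.py - conceptual sketch only; in NAV this would be a codeunit.
# Element names (Error, ItemNo, Message) and the file path are hypothetical.
import xml.etree.ElementTree as ET

# Integration log keyed by the primary key that was exported (here: Item No.).
integration_log = {
    "ITEM-1000": {"exported": True, "error": None},
    "ITEM-1001": {"exported": True, "error": None},
}

def import_error_file(path):
    root = ET.parse(path).getroot()
    for error in root.findall("Error"):
        key = error.findtext("ItemNo")
        message = error.findtext("Message")
        if key in integration_log:
            integration_log[key]["error"] = message  # shown to the user later
        else:
            print(f"error for unknown record {key}: {message}")

# import_error_file("inbox/errors.xml")  # hypothetical inbound error file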
It would take about 3 to 4 weeks in my opinion.
It really depends on the XML files... for example, if they use namespaces then it's much harder. The scenario you've described in your post suggests you'll need some additional setup tables as well. In version 4 there are no XMLports either, so you need to build the XML using Automation libraries.
I think we're talking about 10-15 dev days, plus testing and documentation.
Cheers!

Best way to build a crystal report with subreports

I have been tasked with creating a Crystal Report that includes non-linked subreports. It is meant to replicate the following. I am just having a hard time wrapping my mind around where to begin.
My application consumes a web service that returns a list of objects for each query made. I figured that since Crystal Reports works natively with DataSets, I would create a custom DataSet containing all the tables that the queries involve.
Now that I have created the DataSet and loaded the data from the web service, I am trying to figure out how to query the DataSet so as to join columns from each DataTable and build the report from that query.
Can someone tell me whether there is an easier way to do this, or suggest what route they might take to accomplish it? The report needs to include subreports, which complicates things a bit more.
I've found that it is cleaner and easier to maintain if you write a stored procedure in your database, and then just use that as your source in Crystal. If you have multiple sets of data to report, use multiple stored procedures. If you're going to have multiple subreports, it helps to have a common set of parameters for the procedures, although that is not required.
By getting your data using stored procedures, you can verify that you're getting the correct data before you write the report. Then Crystal is used mostly for formatting and totalling.

Tools and tips for switching CMS

I work for a university, and in the past year we finally broke away from our static HTML site of several thousand pages and moved to a Drupal site. This obviously entails massive amounts of data entry.
What if you're already using a CMS and are switching to another one that better suits your needs? How do you minimize the mountain of data entry during such a huge change? Are there tools built for this, or some best practices one should follow?
The Migrate module for Drupal would provide a big help. The Economist.com data migration to Drupal will give you an overview of the process.
The video from the Migration: not just for the birds presentation at Drupalcon DC 2009 is probably somewhat out-of-date, but also gives a good introduction.
Expect to have to both pre-process and post-process your data manually, whatever happens. Accept early on that your data is likely to be in a worse state than you think it is: fields will be misused; record-to-record references (foreign keys) might not be implemented properly, or at all; content is likely to need weeding and occasionally to be just bad or incorrect.
Check your database encoding. Older databases won't be in Unicode encodings, and get grumpy if you have to export data dumps and import them elsewhere. Even then, assume that there'll be some wacky nonprintable characters in your data: programs like Word seem to somehow inject them everywhere, and I've seen... codepoints... you people wouldn't believe. Consider sweeping your data before you even start (or even sweeping a database dump) for these characters. Decide whether to junk them or to try converting them, as in the case of e.g. Word "smart" punctuation characters.
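As a concrete version of that sweep, a small script along these lines can flag suspicious bytes in a dump before anything is imported; which encodings to try and which characters count as suspicious are assumptions you would tune to your own data.

# sweep_dump.py - flag dump lines containing control characters or bytes that
# don't decode as UTF-8, so they can be cleaned up before import.
import sys

SUSPECT = set(range(0x00, 0x20)) - {0x09, 0x0A, 0x0D}  # controls, minus tab/CR/LF

with open(sys.argv[1], "rb") as dump:
    for lineno, raw in enumerate(dump, start=1):
        try:
            text = raw.decode("utf-8")
        except UnicodeDecodeError:
            print(f"line {lineno}: not valid UTF-8 (latin-1 or cp1252 leftovers?)")
            continue
        bad = sorted({hex(ord(ch)) for ch in text if ord(ch) in SUSPECT})
        if bad:
            print(f"line {lineno}: control characters {bad}")

Run it as python sweep_dump.py yourdump.sql (the file name is just an example) and fix or convert whatever it reports.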
It's very difficult to create explicit data structures from implied ones. If your incoming data has a separate date field, you can map that to a date field; if it has a date as part of a big lump of HTML, even if that date is in a tag with an id attribute, simple scripting won't work. You could use offline scripting with BeautifulSoup or (if your HTML's a bit nicer) the faster lxml to pre-process your data set, extract those implicit fields, and save them into an explicit format. Consider creating an intermediate database where these revisions are going to go.
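For instance, a pre-processing pass with BeautifulSoup might look roughly like the sketch below; the id attribute, the date format, and the intermediate SQLite table are placeholders for whatever your own content actually contains.

# extract_dates.py - pull an implicit publication date out of stored HTML and
# save it as an explicit field in an intermediate database. Names illustrative.
import sqlite3
from datetime import datetime
from bs4 import BeautifulSoup

conn = sqlite3.connect("intermediate.db")
conn.execute("CREATE TABLE IF NOT EXISTS page_dates (page_id INTEGER, pub_date TEXT)")

def extract_date(page_id, html):
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find(id="pub-date")                 # hypothetical id on the date element
    if tag is None:
        return                                     # leave for manual post-processing
    try:
        date = datetime.strptime(tag.get_text(strip=True), "%d %B %Y")
    except ValueError:
        return
    conn.execute("INSERT INTO page_dates VALUES (?, ?)", (page_id, date.isoformat()))

# extract_date(42, "<p id='pub-date'>3 March 2009</p>")   # example usage
conn.commit()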
The Migrate module is excellent, but to get really good data fidelity and play more clever tricks you might need to learn about its hook system (Drupal's terminology for functions following a particular naming scheme) and the basics of writing a module to put these hooks in (a module is broadly just a PHP file where all the functions begin with the same text, the name of the module file.)
All imported content should be flagged for at least a cursory check. You can do this by importing it with status=0 i.e. unpublished, and then create a view with the Views module to go through the content and open it in other tabs for checking. Views Bulk Operations lets you have a set of checkboxes alongside your view items, so you could approve many nodes at once.
Expect to run and re-run and re-run the import, fixing new things every time. Check ten, or twenty items, as early as possible. If there are any problems, check ten or twenty more. Fix and repeat the import.
Gauge how long a single import run is likely to take. Be pessimistic: we had an import we expected to take ten hours encounter exponential slowdown when we introduced the full data set; until we finally fixed some slow queries, it was projected to take two weeks.
If in doubt, or if you think the technical aspects of the above are just going to take more time than the work itself, then just hire temps to do the data. But you still need decent quality controls, as early as possible during their work. Drupal developers are also for hire: try your country's relevant IRC channel, or post a note in a relevant groups.drupal.org group. They're more expensive than temps but they usually write better PHP...! Consider hiring an agency too: that's a shameless plug, as I work for one, but sometimes it's best to get experts in for these specific jobs.
Really good imports are always hard, harder than you expect. Don't let it get you down!
Migrate + Table Wizard (and Schema + Views) is the way to go. With Table Wizard you can expose any table to Drupal and map fields accordingly using Migrate.
Look here for a detailed walkthrough:
http://www.lullabot.com/articles/drupal-data-imports-migrate-and-table-wizard
You'll want to have access to the existing data from Django. This helped me a lot when migrating: http://docs.djangoproject.com/en/1.2/howto/legacy-databases/. With correct model definitions you'll have the full power of Django, including the admin. In fact, I'm using Django purely as an admin backend for several legacy PHP projects; Django's admin can easily outdo a lot of custom hand-written admin scripts.
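A minimal sketch of what such a model definition might look like; the table and column names here are placeholders, and python manage.py inspectdb will generate the real ones from your existing database.

# legacy_models.py - unmanaged Django model over the old CMS database, in the
# spirit of the legacy-databases howto above. Table/column names hypothetical.
from django.db import models

class LegacyPage(models.Model):
    page_id = models.AutoField(primary_key=True)
    title = models.CharField(max_length=255)
    body = models.TextField()
    created = models.DateTimeField()

    class Meta:
        managed = False            # Django won't try to create or alter this table
        db_table = "old_cms_pages"

Register it with the admin (admin.site.register(LegacyPage)) and you get browse and edit screens over the old content for free.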
Authentication should remain the same: users should be able to log in with their existing credentials. But it is hard to write a migration script for auth data, because the password hashing schemes may differ and there is no way to convert between them without knowing the plain passwords. Django provides a way to support different sources of auth, so you can write a Drupal auth backend: http://docs.djangoproject.com/en/1.2/topics/auth/#writing-an-authentication-backend
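A bare-bones sketch of such a backend, assuming the legacy accounts use unsalted MD5 hashes the way Drupal 6 did (Drupal 7 switched to salted, iterated hashes, so check the users table first); the model mapping and column names are assumptions:

# drupal_auth.py - custom authentication backend that checks credentials
# against the legacy Drupal users table. Column names and hash scheme assumed.
import hashlib
from django.contrib.auth.models import User
from django.db import models

class LegacyUser(models.Model):
    # Unmanaged mapping of the old Drupal `users` table.
    name = models.CharField(max_length=60, unique=True)
    password_hash = models.CharField(max_length=128, db_column="pass")

    class Meta:
        managed = False
        db_table = "users"

class DrupalBackend:
    def authenticate(self, username=None, password=None):
        try:
            legacy = LegacyUser.objects.get(name=username)
        except LegacyUser.DoesNotExist:
            return None
        # Drupal 6 stored unsalted MD5 hashes; adjust if your site differs.
        if hashlib.md5(password.encode("utf-8")).hexdigest() != legacy.password_hash:
            return None
        # Mirror the account into Django's own auth tables on first login.
        user, _ = User.objects.get_or_create(username=username)
        return user

    def get_user(self, user_id):
        try:
            return User.objects.get(pk=user_id)
        except User.DoesNotExist:
            return None

The backend then goes into AUTHENTICATION_BACKENDS alongside the default ModelBackend.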
There is no need to do a full rewrite. If some parts are working fine, they can still be powered by Drupal. New code can be written in Django with the same UI. Routing between the old and new parts can be handled by web server URL rewriting, and both the Django and Drupal parts can use the same DB.