SAP Business ByDesign Report - livecycle-designer

I am a beginner with SAP ByD. I have to create a report, but I am having trouble with my data sources as I am not familiar with ByD tables. Is it okay to create a layout design in Adobe LiveCycle and add the data source afterward?
I already have the layout; how can I connect it to the data source?


Power BI Embedded Approach for 100s of SQL Targets

I'm trying to find the best approach to delivering a BI solution to 400+ customers, each of which has their own database.
I've got Power BI Embedded working using service principal licensing, and I have the Power BI service connected to my data through the on-premises data gateway.
I've built my first report pointing to one of the customer databases, which works nicely.
What I want to do next, when embedding the report, is to tell Power BI, for this session, to get the data from a different database.
I'm struggling to find somewhere this is explained, or to understand whether it is even possible.
I'm trying to avoid creating 400+ workspaces or 400+ datasets.
If someone could point me in the right direction, it would be appreciated.
You can parameterize the dataset's data sources, and those parameters determine which database the report connects to:
https://www.phdata.io/blog/how-to-parameterize-data-sources-power-bi/
These parameters can be set by the app hosting the embedded report:
https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/update-parameters-in-group
Because the app is setting the parameter, each user will only see their own data. Since this will be a live connection, you would need to think about how the underlying server can support the workload.
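As a minimal sketch of that flow (Python over the raw REST API; the token, IDs, and parameter names below are placeholders for your own tenant):

import requests

# Placeholders - supply values from your own tenant / app registration.
ACCESS_TOKEN = "<AAD token for the service principal>"
WORKSPACE_ID = "<workspace (group) id>"
DATASET_ID = "<dataset id>"

def point_dataset_at_customer(server: str, database: str) -> None:
    """Update the dataset's data-source parameters for this customer.

    Assumes the dataset was built with two Power Query parameters,
    here called ServerName and DatabaseName (hypothetical names).
    """
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"groups/{WORKSPACE_ID}/datasets/{DATASET_ID}/Default.UpdateParameters"
    )
    body = {
        "updateDetails": [
            {"name": "ServerName", "newValue": server},
            {"name": "DatabaseName", "newValue": database},
        ]
    }
    resp = requests.post(
        url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
    )
    resp.raise_for_status()

point_dataset_at_customer("customer42.db.example.com", "SalesDb")

For an import-mode dataset you would also need to trigger a refresh after updating parameters; with DirectQuery the new values should take effect on subsequent queries.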
An alternative solution would be to consolidate the customer databases into a single database (just the relevant tables) and use row-level security to restrict access for each customer. The advantage of this design is that you take the load off the underlying SQL instance and push it into a Power BI dataset, which is built to handle huge datasets with sub-second response times.
More on that here: https://learn.microsoft.com/en-us/power-bi/enterprise/service-admin-rls
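With that design, per-customer filtering happens when your app generates the embed token with an effective identity. A rough sketch (the role name and username mapping are illustrative, not fixed names):

import requests

ACCESS_TOKEN = "<AAD token for the service principal>"
WORKSPACE_ID = "<workspace (group) id>"
REPORT_ID = "<report id>"
DATASET_ID = "<dataset id>"

def embed_token_for_customer(customer_key: str) -> str:
    """Generate an embed token scoped to one customer via RLS.

    Assumes the dataset defines an RLS role (here 'CustomerFilter')
    whose DAX filter compares a customer column to USERNAME().
    """
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"groups/{WORKSPACE_ID}/reports/{REPORT_ID}/GenerateToken"
    )
    body = {
        "accessLevel": "View",
        "identities": [
            {
                "username": customer_key,      # surfaces as USERNAME() in DAX
                "roles": ["CustomerFilter"],   # hypothetical RLS role name
                "datasets": [DATASET_ID],
            }
        ],
    }
    resp = requests.post(
        url, json=body, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
    )
    resp.raise_for_status()
    return resp.json()["token"]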

Possibility of Migration from Sisense to Microsoft Power BI

We are using Sisense as our reporting tool.
We have a lot of clients using Sisense, and these clients have a great many dashboards and widgets.
Sisense stores its data in MongoDB.
I don't have any experience with Microsoft Power BI.
Is it possible to build a migration tool from Sisense to Microsoft Power BI?
Thank you.
Sisense stores its metadata in a MongoDB instance. Power BI, however, stores its metadata in the PBIX file. If you change the file extension from .pbix to .zip, you can navigate and inspect the contents.
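For example, a quick way to list what's inside a PBIX (Python; the file name is a placeholder):

import zipfile

# A PBIX is a ZIP container, so zipfile can open it directly
# (renaming to .zip is only needed for GUI tools).
with zipfile.ZipFile("MyReport.pbix") as pbix:   # placeholder file name
    for name in pbix.namelist():
        print(name)   # e.g. Report/Layout, DataModel, Version, ...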
When the report is deployed to the Power BI Service, it uses a number of components to store the file and metadata: blob storage and a small SQL instance in the background. You cannot access these items or the data in them.
For the on-premises version of Power BI, Power BI Report Server (available only with Premium or certain enterprise licensing), a SQL Server database is required. It acts as a metadata store for the Power BI front end and also stores the files for the reports loaded to it. You can access this metadata store.
I don't think there is a path to migrate data from the MongoDB instance to the SQL store, the service, or the files; it will be a full recreation of the objects from one reporting technology in the other.
Actually, Power BI uses XML and Sisense uses JAQL; you can parse the JAQL to create a translator that builds rudimentary Power BI reports. Since Sisense uses ElastiCubes, dashboards, and widgets, you have to parse them all to build out the Power BI side. I did this successfully for SSRS; Power BI has a more complex layout, but it can nonetheless be done.
I built it in .NET, using Newtonsoft to extract the JAQL (JSON) and then parsing it to translate to Power BI.
It's not that hard.
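To illustrate the parsing half of such a translator, here is a minimal sketch; the JAQL below is a simplified, hypothetical widget definition, not the full Sisense schema:

import json

# Simplified, hypothetical JAQL widget definition (the real Sisense
# export is considerably richer than this).
raw = """
{
  "title": "Sales by Region",
  "type": "chart/column",
  "metadata": [
    {"jaql": {"dim": "[Orders.Region]", "title": "Region"}},
    {"jaql": {"dim": "[Orders.Amount]", "agg": "sum", "title": "Total Sales"}}
  ]
}
"""

widget = json.loads(raw)

# Translate each JAQL panel into a rough field mapping that a
# Power BI report generator could consume downstream.
fields = []
for panel in widget["metadata"]:
    jaql = panel["jaql"]
    fields.append({
        "name": jaql.get("title", jaql["dim"]),
        "source": jaql["dim"],
        "aggregation": jaql.get("agg", "none"),
    })

print(widget["title"], widget["type"])
for f in fields:
    print(f)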

How to access single-tenant Azure Analysis Services with Power BI Embedded

Currently I'm having difficulty understanding how Power BI Embedded can be set up so that each customer accesses data from their own separate Azure Analysis Services instance; this is an app-owns-data situation. Analysis Services will be running in in-memory mode and will be accessed from Power BI via a live connection.
Ideally, I would like the Power BI report to be ignorant of the dataset/data source until the embedded report is provided with a parameter (e.g. a connection string) that tells it which server to connect to. So, ideally: one workspace, one report, and zero (or a placeholder) dataset.
What I'm roughly looking for is two embedding flows, each accessing a different server (shown as red and blue in a diagram omitted here).
It looks like I could achieve my goal by creating both a report and a dataset per customer, but this seems like a poor approach: if the report needs to be updated, that means updating potentially hundreds of reports. Creating hundreds of reports also seems like unnecessary overhead when all Power BI needs to change per request is the connection string pointing at the data source.
So, is it possible to share the workspace and report across all customers while having completely separate data sources? Or is my approach in conflict with the way Power BI expects to function?
To date, I've tried using query parameters when configuring the data source in Power BI Desktop, but I get the following error:
The connect live option for this file is disabled because it already contains data from another data source. You cannot explore live data and connect to another type of data source in the same file.
Please note,
Every report in Power BI can be connected to only one Dataset.
There is NO ability to dynamically change a connection string on the fly.
Currently, and in the foreseeable future, you'd have to clone the report & dataset per customer (or per connection setup) and modify the new dataset's connection string to match.
You can then dynamically choose which report to display based on your customer's needs.
Cloning a report can be done using:
POST https://api.powerbi.com/v1.0/myorg/reports/{report_id}/Clone
POST https://api.powerbi.com/v1.0/myorg/groups/{group_id}/reports/{report_id}/Clone
https://msdn.microsoft.com/en-us/library/mt784674.aspx
Changing the connection string would be done using:
POST https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/Default.SetAllConnections
(similar API for groups)
https://msdn.microsoft.com/en-us/library/mt748181.aspx
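Put together, a minimal sketch of the clone-then-repoint flow over the raw REST API (Python; the token and IDs are placeholders, and the exact connection string depends on your Analysis Services setup):

import requests

ACCESS_TOKEN = "<AAD access token>"
BASE = "https://api.powerbi.com/v1.0/myorg"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def provision_customer_report(report_id: str, customer_name: str,
                              connection_string: str) -> str:
    """Clone a template report, then repoint its dataset."""
    # 1. Clone the report. Note: by default the clone reuses the source
    #    dataset; pass targetModelId (or import a fresh PBIX copy) if you
    #    need a separate dataset per customer.
    clone = requests.post(
        f"{BASE}/reports/{report_id}/Clone",
        json={"name": f"Report - {customer_name}"},
        headers=HEADERS,
    )
    clone.raise_for_status()
    new_report = clone.json()

    # 2. Point the clone's dataset at this customer's server.
    set_conn = requests.post(
        f"{BASE}/datasets/{new_report['datasetId']}/Default.SetAllConnections",
        json={"connectionString": connection_string},
        headers=HEADERS,
    )
    set_conn.raise_for_status()
    return new_report["id"]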
Using the C#/.NET library provided by the Power BI team, you'd use:
Reports.CloneReport(string reportKey, CloneReportRequest requestParameters)
Datasets.SetAllDatasetConnections(string datasetKey, ConnectionDetails parameters)

SharePoint 2010: best practice to migrate legacy data to a SharePoint list

I have to migrate some legacy data from a stand-alone SQL Server database to a SharePoint list.
I'm going to use a programmatic approach and write code that communicates with the SharePoint list ASMX web service.
Are there some "data transformation wizards" to simplify such a task, or a better approach to port legacy data from a SQL Server database to a SharePoint list?
Thank you in advance!
This being a one-time operation, I would not worry about best practice but would consider the fastest way to do it.
You can use Excel 2010 (I have not tested this with Excel 2007) to export data to SharePoint 2010. Here are the high-level steps:
1. Import data from SQL Server using the Data tab in the ribbon.
2. Excel will automatically create a table.
3. Now you can prepare the data for export to SharePoint. Here, you can remove unwanted columns, add new columns, remove unwanted rows, rearrange columns, etc.
4. While in the table, use the "Export Table to SharePoint List" functionality to publish your data to SharePoint. More information is available at: http://office.microsoft.com/en-gb/excel-help/export-an-excel-table-to-a-sharepoint-list-HA010131472.aspx
It is quick! But let's be aware of the limitations:
1. It cannot publish data to a list which already exists.
2. It will not create a content type for the exported list. The columns are attached directly to the list.
If you want greater control over the migration, programming may be the way to go, unless someone in this great forum has a better idea!
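If you do go the programmatic route against the lists.asmx service the question mentions, the core call is UpdateListItems with a CAML batch. A rough sketch (the site URL, list name, field, and credentials are placeholders; on-premises SharePoint 2010 typically wants NTLM auth, here via the requests_ntlm package):

import requests
from requests_ntlm import HttpNtlmAuth   # assumption: NTLM-secured site

SITE = "http://sharepoint.example.com/sites/legacy"   # placeholder
LIST_NAME = "LegacyData"                              # placeholder

SOAP_BODY = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <UpdateListItems xmlns="http://schemas.microsoft.com/sharepoint/soap/">
      <listName>{list_name}</listName>
      <updates>
        <Batch OnError="Continue">
          <Method ID="1" Cmd="New">
            <Field Name="Title">{title}</Field>
          </Method>
        </Batch>
      </updates>
    </UpdateListItems>
  </soap:Body>
</soap:Envelope>"""

def add_list_item(title: str) -> None:
    """Insert one row into the list via the Lists web service."""
    resp = requests.post(
        f"{SITE}/_vti_bin/Lists.asmx",
        data=SOAP_BODY.format(list_name=LIST_NAME, title=title),
        headers={
            "Content-Type": "text/xml; charset=utf-8",
            "SOAPAction":
                '"http://schemas.microsoft.com/sharepoint/soap/UpdateListItems"',
        },
        auth=HttpNtlmAuth("DOMAIN\\user", "password"),  # placeholder creds
    )
    resp.raise_for_status()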

Problem regarding integration of various data sources

We have 4 data sources. 2 of them are internal and we can connect directly to the database. For the 3rd data source we get a flat file (.csv) and have to pull in the data. The 4th data source is external and we cannot access it directly.
We need to pull data from all 4 data sources, run business rules on them, and store the results in our database. We have a web application that runs on top of this database. Also, every month we have to pull the data and apply any updates/deletes/adds etc. to the existing data.
I am pretty much ignorant about this process. Also, can you please point me to some good books on this topic?
These are the current approaches I was considering:
1. Write an internal web service that will talk to the internal data sources and pull data. Create a handler for the external data source using middleware (MQSeries is already set up for this in another existing project; I plan to reuse that). Pull data from the CSV file, again using Java. On this data, run business rules from Java, then use the results. This approach might run on my dev box, but I'm not sure what problems could occur in prod (especially due to synchronization).
2. Pull data from the internal sources using a plain Java JDBC connection. For the remaining 2, get flat files and load the data using SQL*Loader. All the data goes to temporary tables first; run the business rules through PL/SQL and use the results.
3. Use an ETL tool like Informatica to pull the data, and write business rules in Perl (invoked by Informatica).
Thanks.
A book like "The Data Warehouse ETL Toolkit" by Ralph Kimball is a good resource for learning techniques/architectures to bring data from different sources into one place.