Power BI dataset relationships grayed out

I have created a Power BI dataset with the Power BI REST API. There are 2 tables in this dataset.
Now, I'm creating a new report with data from this dataset (with Power BI Desktop).
The problem is that the "Manage relationships" command is grayed out. I have read somewhere that I should switch from a "connected live" dataset to an import dataset, but I'm not sure it applies in this case and I did not even find how to do it yet.
So, the question is: how can one enable the "Manage relationships" command for data coming from a Power BI dataset? Is it some flag I should set to a specific value when I create the dataset with the API? Or something to do in Power BI Desktop I've been unable to find up to now?

If you are using a dataset created with the REST API, then it must be added to the report through the Power BI datasets data source. This means it is a live connection, in which the modeling is done in the data source itself (think of it as connecting to an SSAS cube). In this case your options in the report are quite limited (creating measures is pretty much all you can do).
You can't switch to Import mode in this case. To import, you would have to bypass the dataset completely and load the data directly from the original data source that was used to populate it.
If you are missing a relationship between the tables in the dataset, you can define it when creating the dataset, with a JSON body like this:
{
  "name": "SalesData",
  "defaultMode": "Push",
  "tables": [
    {
      "name": "Customers",
      "columns": [
        { "name": "CustomerId", "dataType": "Int64" },
        { "name": "CustomerName", "dataType": "string" }
      ]
    },
    {
      "name": "Orders",
      "columns": [
        { "name": "CustomerId", "dataType": "Int64" },
        { "name": "OrderDate", "dataType": "Datetime" },
        { "name": "Amount", "dataType": "Double" }
      ]
    }
  ],
  "relationships": [
    {
      "name": "FK_Orders_Customers",
      "fromTable": "Orders",
      "fromColumn": "CustomerId",
      "toTable": "Customers",
      "toColumn": "CustomerId",
      "crossFilteringBehavior": "bothDirections"
    }
  ]
}
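For completeness, a dataset defined this way is created by POSTing the body to the Push Datasets endpoint of the REST API. Here is a minimal Node.js sketch (Node 18+ for the built-in fetch; accessToken stands in for a valid Azure AD token, and acquiring it is not shown):
// Minimal sketch: create the push dataset (tables plus relationship) via the
// Power BI REST API. `accessToken` is a placeholder for a valid Azure AD token.
const datasetDefinition = {
  name: "SalesData",
  defaultMode: "Push",
  tables: [ /* Customers and Orders as shown above */ ],
  relationships: [ /* FK_Orders_Customers as shown above */ ]
};

async function createDataset(accessToken) {
  const response = await fetch("https://api.powerbi.com/v1.0/myorg/datasets", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${accessToken}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(datasetDefinition)
  });
  console.log(response.status, await response.json());
}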

Related

Amazon SP-API Listings API putListingsItem How To Update price and quantity? Node.js

I am using amazon-sp-api (JavaScript client for the Amazon Selling Partner API) but this is not limited to this client. All I want to do is use the Amazon SP-API Listings API's putListingsItem call to update the price and quantity of an item I have listed.
productType
According to the ListingsItemPutRequest docs, productType and attributes are required for this call.
Firstly, to obtain the correct productType value, you are supposed to search for a product definitions type using the Product Type Definitions API. So, I do that, and call searchDefinitionsProductTypes, just to discover my product has no matching product type.
Ultimately, I gave the value PRODUCT for the productType field. Using PRODUCT, I made the getDefinitionsProductType call and got an object containing an array of propertyNames, shown below:
"propertyNames": [
"skip_offer",
"fulfillment_availability",
"map_policy",
"purchasable_offer",
"condition_type",
"condition_note",
"list_price",
"product_tax_code",
"merchant_release_date",
"merchant_shipping_group",
"max_order_quantity",
"gift_options",
"main_offer_image_locator",
"other_offer_image_locator_1",
"other_offer_image_locator_2",
"other_offer_image_locator_3",
"other_offer_image_locator_4",
"other_offer_image_locator_5"
]
},
On seeing this, I decided list_price and fulfillment_availability must be the price and quantity, and then tried using them in my code below.
attributes
The attributes value is also required. However, the current docs show no clear example of what to put for these values, which is where I must put the price and quantity.
I found this link about patchListingsItem and tried to implement that below but got an error.
code:
// trying to update quantity... failed.
a.response = await a.sellingPartner.callAPI({
  operation: 'putListingsItem',
  path: {
    sellerId: process.env.SELLER_ID,
    sku: `XXXXXXXXXXXX`
  },
  query: {
    marketplaceIds: [ `ATVPDKIKX0DER` ]
  },
  body: {
    "productType": `PRODUCT`,
    "requirements": "LISTING_OFFER_ONLY",
    "attributes": {
      "fulfillment_availability": {
        "fulfillment_channel_code": "AMAZON_NA",
        "quantity": 4,
        "marketplace_id": "ATVPDKIKX0DER"
      }
    }
  }
});
console.log( `a.response: `, a.response )
error:
{
  "sku": "XXXXXXXXXXXX",
  "status": "INVALID",
  "submissionId": "34e1XXXXXXXXXXXXXXXXXXXX",
  "issues": [
    {
      "code": "4000001",
      "message": "The provided value for 'fulfillment_availability' is invalid.",
      "severity": "ERROR",
      "attributeName": "fulfillment_availability"
    }
  ]
}
I also tried using list_price:
// list_price attempt... failed.
a.response = await a.sellingPartner.callAPI({
  operation: 'putListingsItem',
  path: {
    sellerId: process.env.SELLER_ID,
    sku: `XXXXXXXXXXXX`
  },
  query: {
    marketplaceIds: [ `ATVPDKIKX0DER` ]
  },
  body: {
    "productType": `PRODUCT`,
    "requirements": "LISTING_OFFER_ONLY",
    "attributes": {
      "list_price": {
        "Amount": 90,
        "CurrencyCode": "USD"
      }
    }
  }
});
console.log( `a.response: `, a.response )
Error (this time it seems I got warmer... maybe?):
{
  "sku": "XXXXXXXXXXXX",
  "status": "INVALID",
  "submissionId": "34e1XXXXXXXXXXXXXXXXXXXX",
  "issues": [
    {
      "code": "4000001",
      "message": "The provided value for 'list_price' is invalid.",
      "severity": "ERROR",
      "attributeName": "list_price"
    }
  ]
}
How do you correctly specify the list_price or the quantity so this call will be successful?
I'm just trying to update a single item's price and quantity.
The documentation for this side of things is terrible. I've managed to get some of it through a fair bit of trial and error though.
Fulfillment and availability can be set with this block of JSON:
"fulfillment_availability": [{
  "fulfillment_channel_code": "DEFAULT",
  "quantity": "9999",
  "lead_time_to_ship_max_days": "5"
}]
and List price gets set, oddly, with this block. I'm still trying to find out how to set the List Price with Tax however.
"purchasable_offer": [{
"currency": "GBP",
"our_price": [{"schedule": [{"value_with_tax": 285.93}]}],
"marketplace_id": "A1F83G8C2ARO7P"
}]
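Putting the two blocks together, a complete putListingsItem call with the amazon-sp-api client might look roughly like the sketch below. The SKU, seller ID and US marketplace ID are the placeholders from the question, and the exact attribute shapes are a starting point to verify rather than a guaranteed payload:
// Rough sketch combining the blocks above into one putListingsItem call.
// SELLER_ID, the SKU and the marketplace ID are placeholders.
const response = await sellingPartner.callAPI({
  operation: 'putListingsItem',
  path: {
    sellerId: process.env.SELLER_ID,
    sku: 'XXXXXXXXXXXX'
  },
  query: {
    marketplaceIds: ['ATVPDKIKX0DER']
  },
  body: {
    productType: 'PRODUCT',
    requirements: 'LISTING_OFFER_ONLY',
    attributes: {
      fulfillment_availability: [{
        fulfillment_channel_code: 'DEFAULT',
        quantity: 4
      }],
      purchasable_offer: [{
        currency: 'USD',
        our_price: [{ schedule: [{ value_with_tax: 90 }] }],
        marketplace_id: 'ATVPDKIKX0DER'
      }]
    }
  }
});
console.log(response);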
Hope this helps you out :)

How can I retrieve the list of measures contained in a SSAS connected Power BI report?

I am working on a PBI report created by someone else in the organisation and I need to do some auditing of all the measures (50 or more) contained in the report itself.
The report connects to an on-premises instance of SQL Server Analysis Services.
I am trying to get the list of all measures contained in the report. To achieve that, in previous occasions, I used DAX Studio to connect to the running instance of the PBI Desktop as described in https://exceleratorbi.com.au/getting-started-dax-studio/ .
However, as this report connects to SSAS, when I try to connect DAX Studio to it, I get an error:
"No Databases were found when connecting to PBI Desktop. If your PBI file is using a Live Connection please connect directly to the source model instead."
Is there another known method I can use to extract all measures from the PBIX itself?
If you rename the .pbix file to .zip and open it as a zip file you will see a Report folder and then a file called Layout. If you copy that file out of the .zip file and open it in a text editor (preferably an app which can format JSON) you will see the following:
{
"id": 0,
"resourcePackages": [
//some packages here
],
"sections": [
//some sections here...
],
"config": "{\"version\":\"5.3\",\"themeCollection\":{\"baseTheme\":{\"name\":\"CY19SU06\",\"version\":\"5.5\",\"type\":2}},\"activeSectionIndex\":0,\"modelExtensions\":[{\"name\":\"extension\",\"entities\":[{\"name\":\"DimDate\",\"extends\":\"DimDate\",\"measures\":[{\"name\":\"My Report Measure\",\"dataType\":3,\"expression\":\"DIVIDE(99,100)\",\"errorMessage\":null,\"hidden\":false,\"formulaOverride\":null,\"formatInformation\":{\"formatString\":\"G\",\"format\":\"General\",\"thousandSeparator\":false,\"currencyFormat\":null,\"dateTimeCustomFormat\":null}}]},{\"name\":\"DimCustomer\",\"extends\":\"DimCustomer\",\"measures\":[{\"name\":\"My Report Measure 2\",\"dataType\":3,\"expression\":\"99 + 100\",\"errorMessage\":null,\"hidden\":false,\"formulaOverride\":null,\"formatInformation\":{\"formatString\":\"G\",\"format\":\"General\",\"thousandSeparator\":false,\"currencyFormat\":null,\"dateTimeCustomFormat\":null}}]}]}],\"defaultDrillFilterOtherVisuals\":true,\"settings\":{\"useStylableVisualContainerHeader\":true,\"exportDataMode\":1,\"useNewFilterPaneExperience\":true,\"allowChangeFilterTypes\":true},\"objects\":{\"section\":[{\"properties\":{\"verticalAlignment\":{\"expr\":{\"Literal\":{\"Value\":\"'Top'\"}}}}}]}}",
"layoutOptimization": 0
}
If you look in the config property there's a string which has JSON in it. If you pull out the JSON and format it you will get:
{
"version": "5.3",
"themeCollection": {
"baseTheme": {
"name": "CY19SU06",
"version": "5.5",
"type": 2
}
},
"activeSectionIndex": 0,
"modelExtensions": [
{
"name": "extension",
"entities": [
{
"name": "DimDate",
"extends": "DimDate",
"measures": [
{
"name": "My Report Measure",
"dataType": 3,
"expression": "DIVIDE(99,100)",
"errorMessage": null,
"hidden": false,
"formulaOverride": null,
"formatInformation": {
"formatString": "G",
"format": "General",
"thousandSeparator": false,
"currencyFormat": null,
"dateTimeCustomFormat": null
}
}
]
},
{
"name": "DimCustomer",
"extends": "DimCustomer",
"measures": [
{
"name": "My Report Measure 2",
"dataType": 3,
"expression": "99 + 100",
"errorMessage": null,
"hidden": false,
"formulaOverride": null,
"formatInformation": {
"formatString": "G",
"format": "General",
"thousandSeparator": false,
"currencyFormat": null,
"dateTimeCustomFormat": null
}
}
]
}
]
}
],
"defaultDrillFilterOtherVisuals": true,
"settings": {
"useStylableVisualContainerHeader": true,
"exportDataMode": 1,
"useNewFilterPaneExperience": true,
"allowChangeFilterTypes": true
},
"objects": { "section": [ { "properties": { "verticalAlignment": { "expr": { "Literal": { "Value": "'Top'" } } } } } ] }
}
You will see My Report Measure in the DimDate table, whose expression is DIVIDE(99,100), and My Report Measure 2 in the DimCustomer table, whose expression is 99 + 100. Those are simplistic examples, but they give you the idea.
Obviously this is all undocumented and subject to change, but that's the only way I'm aware of to get these measures which are added to the PBIX (rather than measures in the SSAS model itself).
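If you need to do this audit for more than a couple of reports, the manual unzip-and-inspect steps above can be scripted. Here is a rough Node.js sketch, assuming the adm-zip npm package; the Layout entry in the .pbix files I have seen is UTF-16LE encoded, but verify that against your own files:
// Rough sketch: extract report-level measures from a .pbix file's Report/Layout entry.
const AdmZip = require("adm-zip");

function listReportMeasures(pbixPath) {
  const zip = new AdmZip(pbixPath);
  const layoutEntry = zip.getEntry("Report/Layout");
  // The Layout file is typically UTF-16LE; strip a possible BOM before parsing.
  const layoutJson = layoutEntry.getData().toString("utf16le").replace(/^\uFEFF/, "");
  const layout = JSON.parse(layoutJson);
  const config = JSON.parse(layout.config);   // config is itself a JSON string
  const measures = [];
  for (const extension of config.modelExtensions || []) {
    for (const entity of extension.entities || []) {
      for (const measure of entity.measures || []) {
        measures.push({ table: entity.name, name: measure.name, expression: measure.expression });
      }
    }
  }
  return measures;
}

console.log(listReportMeasures("MyReport.pbix"));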

DynamoDB LIKE '%' (contains) search over an array of objects using a key from the object, NodeJS

I am trying to use a "LIKE" search on DynamoDB where I have an array of objects using nodejs.
Looking through the documentation and other related posts I have seen this can be done using the CONTAINS parameter.
My question is - Can I run a scan or query over all of my items in DynamoDB where a value in my object is LIKE "Test 2".
Here is my DynamoDB table as JSON:
{
"items": [
{
"description": "Test 1 Description",
"id": "86f550e3-3dee-4fea-84e9-30df174f27ea",
"image": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/86f550e3-3dee-4fea-84e9-30df174f27ea.jpg",
"live": 1,
"status": "new",
"title": "Test 1 Title"
},
{
"description": "Test 2 Description",
"id": "e17dbb45-63da-4567-941c-bb7e31476f6a",
"image": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/e17dbb45-63da-4567-941c-bb7e31476f6a.jpg",
"live": 1,
"status": "new",
"title": "Test 2 Title"
},
{
"description": "Test 3 Description",
"id": "14ad228f-0939-4ed4-aa7b-66ceef862301",
"image": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/14ad228f-0939-4ed4-aa7b-66ceef862301.jpg",
"live": 1,
"status": "new",
"title": "Test 3 Title"
}
],
"userId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX"
}
I am trying to perform a scan / query which will look over ALL users (every row) and look at ALL items and return ALL instances where description is LIKE "Test 2".
I have tried variations of scans as per the below:
{
"TableName": "my-table",
"ConsistentRead": false,
"ExpressionAttributeNames": {
"#items": "items",
},
"FilterExpression": "contains (#items, :itemVal)",
"ExpressionAttributeValues": {
":itemVal":
{
"M": {
"description": {
"S": "Test 2 Description"
},
"id": {
"S": "e17dbb45-63da-4567-941c-bb7e31476f6a"
},
"image": {
"S": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX/e17dbb45-63da-4567-941c-bb7e31476f6a.jpg"
},
"live": {
"N": "1"
},
"status": {
"S": "new"
},
"title": {
"S": "Test 2 Title"
}
}
}
}
}
The above scan works, but as you can see I am passing in the whole object as the ExpressionAttributeValue. What I want to do is just pass in the description, for example something like the below (which doesn't work and returns no items found).
{
"TableName": "my-table",
"ConsistentRead": false,
"ExpressionAttributeNames": {
"#items": "items.description",
},
"FilterExpression": "contains (#items, :itemVal)",
"ExpressionAttributeValues": {
":itemVal":
{
"S": "Test 2"
}
}
}
Alternatively, would it be better to create a separate table where all the items are added and they are linked via the userId? I was always under the impression there should be one table per application but in this instance I think if I had all the item data at the top level, scanning it would be a lot safer and faster.
So, with nearly 200 views since posting and no responses, I have come up with an alternative approach that does not directly solve the initial problem (I honestly do not think it can be solved).
Firstly, I do not want two tables, as this seems overkill, and I do not want the AWS costs associated with a second table.
This has led me to restructure the primary keys with prefixes, which I can search over using the DynamoDB begins_with function.
Users will be added as U_{USER_ID} and items will be added as I_{USER_ID}_{ITEM_ID}. This way I only have one table to manage and pay for, and it allows me to run begins_with "U_" to get a list of users or "I_" to get a list of items.
I will then flatten the item data as strings so I can run "contains" searches on any of the item data. This also allows me to run a "contains {USER_ID}" search on the primary keys for items so I can get a list of items for a particular user.
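For anyone wanting a concrete starting point, here is a minimal sketch of that approach with the AWS SDK for JavaScript (v2 DocumentClient). The table name, the key attribute name (pk) and the prefixes are assumptions based on the scheme described above, not a definitive schema:
// Minimal sketch: scan for all "item" rows belonging to one user,
// assuming keys shaped like "I_{USER_ID}_{ITEM_ID}" as described above.
const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function getItemsForUser(userId) {
  const params = {
    TableName: 'my-table',                         // hypothetical table name
    FilterExpression: 'begins_with(#pk, :prefix)',
    ExpressionAttributeNames: { '#pk': 'pk' },     // assumed key attribute name
    ExpressionAttributeValues: { ':prefix': `I_${userId}_` }
  };
  // Page through the scan until all matching rows have been read.
  const items = [];
  let lastKey;
  do {
    const page = await docClient.scan({ ...params, ExclusiveStartKey: lastKey }).promise();
    items.push(...page.Items);
    lastKey = page.LastEvaluatedKey;
  } while (lastKey);
  return items;
}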
Hope this helps anyone who might come up against the same issue.

Custom Table Visual in Power BI

I am looking to make changes to the sampleBarChart visual to create a custom table visual.
I have come across this post and looked at several repos on GitHub to find out how this can be done. I am able to bring the data into the dataView but could not figure out how to build the visual out of it. Here is the capabilities.json:
{
"dataRoles": [{
"displayName": "Values",
"name": "Values",
"kind": "Grouping"
}],
"dataViewMappings": [{
"table": {
"rows": {
"select": [{"for": {"in": "Values"}}]
}
}
}]
}
The data is there, but how do I create a table visual in visual.ts?
Once the table is created and the data is rendered in columns and rows, my objective would be to transpose it and create individual rows for each column.

Power BI Custom Visual Dataview grouping issue even though not summarized

I am having a problem where the dataView object only contains the unique values or rows from the table.
I have tried setting the dataRoles kind (powerbi.VisualDataRoleKind) to Grouping, Measure and GroupingOrMeasure. I have also tried giving the dataViewMappings a categorical mapping (dataReductionAlgorithm: { top: {} }) as well as values (select: [{ bind: { to: 'Y' } }]). I have tried the "Do not summarize" option, the "keep duplicates" option, and changing the column types to whole number, text, decimal, etc., but nothing worked for me. What am I missing, and what do I have to do to bind the entire table as-is in the Power BI dev tools?
Below is my code:
public static capabilities: VisualCapabilities = {
// This is what will appear in the 'Field Wells' in reports
dataRoles: [
{
displayName: 'Category',
name: 'Category',
kind: powerbi.VisualDataRoleKind.Grouping,
},
{
displayName: 'Y Axis',
name: 'Y',
kind: powerbi.VisualDataRoleKind.Measure,
},
],
// This tells power bi how to map your roles above into the dataview you will receive
dataViewMappings: [{
categorical: {
categories: {
for: { in: 'Category' },
dataReductionAlgorithm: { top: {} }
},
values: {
select: [{ bind: { to: 'Y' } }]
},
}
}],
// Objects light up the formatting pane
objects: {
general: {
displayName: data.createDisplayNameGetter('Visual_General'),
properties: {
formatString: {
type: { formatting: { formatString: true } },
},
},
},
}
};
Thanks in advance.
Power BI will pretty much always summarize in a categorical data view. You can try to work around it by asking for categorical values you think will be unique, but that's subject to your users' judgement.
Switching to a Table data view might be an option; I think you'll see "Do not summarize" take effect there. It has its own challenges, like identifying which field goes where and the need to do the math yourself for aggregates.
You might submit an idea at https://ideas.powerbi.com with your desired scenario.
I know this post is old, but it took me forever to find the answer for this same problem. So I did end up using the table format, but it is still a little quirky. Let me give you my example and explain a little:
{
"dataRoles": [
{
"displayName": "Legend",
"name": "legend",
"kind": "GroupingOrMeasure"
},
{
"displayName": "Priority",
"name": "priority",
"kind": "GroupingOrMeasure"
},
{
"displayName": "SubPriority",
"name": "subpriority",
"kind": "GroupingOrMeasure"
}
],
"dataViewMappings": [
{
"table": {
"rows": {
"select": [{
"for": {
"in": "legend"
}
},
{
"for": {
"in": "priority"
}
},
{
"for": {
"in": "subpriority"
}
}
]
}
}
}
]
}
So I want my dataRoles to be GroupingOrMeasures, but I don't believe that is necessary here.
OK, so in the dataViewMappings, I have it marked as "I want my data in a table, in rows consisting of legend values, priority values, and subpriority values."
There are two quirky parts to this. First, your data will be sorted by default in the order in which you declare these things. So if you bring in your legend values first, that is how the table is sorted (by the order of your first column's values). And second, it will only save unique rows to the table.
So if I had two rows of:
Legend Priority Subpriority
Canada Recyclables Plastic
Canada Recyclables Plastic
Then only one row would appear in the table. Also, this means that if you were trying to get all rows of Legend, but only have the legend value selected for the table, you will get only one row per value, because repeated values do not make a unique row.
So if you have two rows of:
Canada
Canada
You would get only one row value with the entry of Canada.
And I would also caution you against incomplete data, namely null values. In my above example of Legend Priority Subpriority, if there are repeated values of "blank", only one row will show if all other fields match as well.
The only way to totally guarantee you will get back each individual row, no matter what, is to ensure that each row is unique. In my own work, I was going to just add a unique key column (primary key - indexing the rows - 1, 2, 3, etc.), but I found that priority and subpriority act as a combined unique key. If there are shared priorities, subpriorities are guaranteed to be different.
After knowing this and including these, I can add anything else I want and know that I will get all values back for each individual row.
To see the hierarchy of how to access the data from this point, after you drag in the appropriate values, just use the "Show DataView" tool under or above your visual (it is next to the reload / toggle autoreload icons).
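Once the rows are coming through, here is a minimal, illustrative sketch of turning the table data view back into plain row objects inside your visual code (the helper name and the example field names are made up; the dataViews[0].table shape is the standard table mapping):
// Rough sketch: convert the table data view into plain row objects.
// `options` is the update options object passed to the visual's update method.
function tableRowsAsObjects(options) {
  const dataView = options.dataViews && options.dataViews[0];
  if (!dataView || !dataView.table) {
    return [];
  }
  const columnNames = dataView.table.columns.map(c => c.displayName);
  // Each entry in table.rows is an array of cell values in column order.
  return dataView.table.rows.map(row => {
    const record = {};
    row.forEach((value, i) => { record[columnNames[i]] = value; });
    return record;
  });
}
// e.g. [{ Country: "Canada", Waste: "Recyclables", Type: "Plastic" }, ...]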
This information was enough for my final solution, so I hope this answer helps others as well.