I get a SQL INSERT error when I use the Patch function in PowerApps on a table whose foreign key relies on the primary key of a second table that hasn't been Patched yet.
Which makes sense: how could I be allowed to Patch a table with a blank dependency? So how can it be done?
Here are the FK/PK dependencies of all 5 tables:
So far I've tried:
Allowing NULL on the FK column
Removing CASCADE from the FK's UPDATE and DELETE rules
Any further ideas? I specifically need example functions.
Thank you
The Patch function returns the updated (or inserted) record with any server-generated fields filled in, so you can store it and use it later to retrieve the server-generated id. Using Last will work most of the time, but it may fail if two users are in the app at the same time, or if the table grows too big (not all of it will be cached locally at once).
Set(
    patchResult,
    Patch(
        '[dbo].[dateTable]',
        Defaults('[dbo].[dateTable]'),
        {
            siteId: varSiteID,
            readingDate: Now()
        }
    )
);
//Patch values into readingTable
Patch(
    '[dbo].[readingTable]',
    Defaults('[dbo].[readingTable]'),
    {
        dateId: patchResult.dateId,
        unitNum: 1,
        xzyName: 1,
        avgJJk: 1,
        prevLLk: 1,
        readingNotes: "This is awesome"
    }
);
Just figured this one out:
You have to Patch these in order, so that the PKs are Patched first, then grabbed via the Last() function and inserted (as FKs) into the following Patches.
Hope this helps someone else out.
Example:
//Patch values into dateTable
Patch(
    '[dbo].[dateTable]',
    Defaults('[dbo].[dateTable]'),
    {
        siteId: varSiteID,
        readingDate: Now()
    }
);
//Patch values into readingTable
Patch(
    '[dbo].[readingTable]',
    Defaults('[dbo].[readingTable]'),
    {
        dateId: Last('[dbo].[dateTable]').dateId, // <-- BINGO
        unitNum: 1,
        xzyName: 1,
        zyxNum: 1,
        xkdFactor: 1,
        supplyXya: 1,
        supplyUio: 1,
        sortNum: 1,
        currentUys: 1,
        avgJJk: 1,
        prevLLk: 1,
        readingNotes: "This is awesome"
    }
);
//Patch values into the imageTable
ForAll(
    colImageGallery,
    Patch(
        '[dbo].[imageTable]',
        Defaults('[dbo].[imageTable]'),
        {
            readingId: Last('[dbo].[readingTable]').readingId, // <-- BINGO
            photo: image,
            photoNotes: " "
        }
    )
);
I am trying to add conditional formatting into my table and although it works for some of the data, about half of it is not returning any value for the condition.
Condition = MAXX('2020 PNRs',
IF('2020 PNRs'[2020-2021] <=1, 1,
IF('2020 PNRs'[2020-2021] <=10000 , 2,
IF('2020 PNRs'[2020-2021] >10000 , 3))))
2020-2021 is a calculated measure
2020-2021 =
SUM('TTV'[2020TTV]) + SUM('TTV'[2021TTV])
but it is quite simple, so I am not sure whether that has anything to do with why it's not working.
Your Condition formula filters '2020 PNRs' table. It might be that the top part of your screenshot presents results that are not available on '2020 PNRs', but are rather sourced from some other table (e.g. '2021 PNRs'?). It would be good for us to understand the source of your table and your data model.
In the meantime, I suggest testing the following measure:
Condition =
SWITCH(
TRUE(),
[2020-2021] <= 1, 1,
[2020-2021] <=10000, 2,
[2020-2021] >10000, 3
)
If this still doesn't work, then return the value of the [2020-2021] measure as the last argument, to understand the issue:
Condition =
SWITCH(
TRUE(),
[2020-2021] <= 1, 1,
[2020-2021] <=10000, 2,
[2020-2021] >10000, 3,
[2020-2021]
)
Also asked this on the PowerBI forum.
I am trying to change sampleBarChart PowerBI visual to use a "table" data binding instead of current "categorical". First goal is to build a simple table visual, with inputs "X", "Y" and "Value".
Both data bindings are described on the official wiki. This is all I could find:
I cannot find any example visuals which use it and are based on the new API.
From the image above, a table object has "rows", "columns", "totals" and "identities". So it looks like rows and columns are my x/y indexes, and totals are my values?
This is what I tried. (Naming is slightly off as most of it came from existing barchart code)
Data roles:
{ "displayName": "Category1 Data",
"name": "category1",
"kind": 0},
{ "displayName": "Category2 Data",
"name": "category2",
"kind": 0},
{ "displayName": "Measure Data",
"name": "measure",
"kind": 1}
Data view mapping:
"table": {
"rows": {"for": {"in": "category1"}},
"columns": {"for": {"in": "category2"}},
"totals": {"select": [{"bind": {"to": "measure"}}]}
}
Data Point class:
interface BarChartDataPoint {
value: number;
category1: number;
category2: number;
color: string;
};
Relevant parts of my visualTransform():
...
let category1 = categorical.rows;
let category2 = categorical.columns;
let dataValue = categorical.totals;
...
for (let i = 1, len = category1.length; i <= len; i++) {
    for (let j = 1, jlen = category2.length; j <= jlen; j++) {
        barChartDataPoints.push({
            category1: i,
            category2: j,
            value: dataValue[i,j],
            color: "#555555" // for now
        });
    }
}
...
Test data looks like this:
    1  2  3
1 | 4  4  3
2 | 4  5  5
3 | 3  6  7    (total = 41)
The code above fills barChartDataPoints with just six data points:
(1; 1; 41),
(1; 2; undefined),
(2; 1; 41),
(2; 2; undefined),
(3; 1; 41),
(3; 2; undefined).
Accessing zero indices results in nulls.
Q: Is totals not the right measure to access value at (x;y)? What am I doing wrong?
Any help or direction is very appreciated.
User #RichardL shared this link on the PowerBI forum, which helped quite a lot.
"Totals" is not the right measure to access value at (x;y).
It turns out Columns contain column names, and Rows contain value arrays which correspond to those columns.
From the link above, this is what the table structure looks like:
{
"columns":[
{"displayName": "Year"},
{"displayName": "Country"},
{"displayName": "Cost"}
],
"rows":[
[2014, "Japan", 25],
[2015, "Japan", 30],
[2016, "Japan", 18],
[2015, "North America", 14],
[2016, "North America", 30],
[2016, "China", 100]
]
}
You can also view the data exactly as your visual receives it by placing
window.alert(JSON.stringify(options.dataViews))
in your update() method, or by writing it into the HTML contents of your visual.
This was very helpful, but it highlights a few fundamental problems with PowerBI's data management for custom visuals. There is no documentation, and the process from data roles to mapping to visualTransform is horrendous, because it takes so much effort to rebuild the data into a format that works consistently with D3.
Commenting on user5226582's example: for me, columns is presented in a form where I have to look up the Roles property to understand the order of the data in the rows arrays. displayName offers no certainty. For example, if a user uses the same field in two different dataRoles, it all goes crazily awry.
I think the safest approach is to build a new array inside visualTransform using the well-known field names (the "name" property in dataRoles), then iterate over columns, interrogating the Roles property to establish an index into the rows array items. Then use that index to populate the new array reliably. D3 then gobbles that up.
I know that's crazy, but at least it means reasonably consistent data, and it allows for the user selecting the same data field more than once, or choosing count instead of column value.
All in all, I think this area needs a lot of attention before custom Visuals can really take off.
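The indexing approach described above is language-agnostic. Here is a minimal sketch in Python; the dataView shape, role names, and values are assumptions for illustration, not the actual PowerBI API:

```python
# Hypothetical dataView.table payload: each column carries a "roles" dict,
# and each row is an array ordered the same way as the columns.
table = {
    "columns": [
        {"displayName": "Year",    "roles": {"category1": True}},
        {"displayName": "Cost",    "roles": {"measure": True}},
        {"displayName": "Country", "roles": {"category2": True}},
    ],
    "rows": [
        [2014, 25, "Japan"],
        [2015, 14, "North America"],
    ],
}

# Map each data-role name to its column index by interrogating "roles",
# instead of trusting displayName or column order.
role_index = {}
for i, col in enumerate(table["columns"]):
    for role in col.get("roles", {}):
        role_index[role] = i

# Rebuild rows as records keyed by role name - a shape D3 can consume directly.
points = [
    {role: row[idx] for role, idx in role_index.items()}
    for row in table["rows"]
]
print(points[0])
```

Because the lookup goes through the role name rather than the display name, it stays correct even when the same field is bound to two different data roles.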
First of all, there is my script:
import psycopg2
import sys

data = ((160000,),
        (40000,),
        (75000,))

def main():
    try:
        connection = psycopg2.connect("""host='localhost' dbname='postgres'
                                         user='postgres'""")
        cursor = connection.cursor()
        query = "UPDATE Planes SET Price=%s"
        cursor.executemany(query, data)
        connection.commit()
    except psycopg2.Error, e:
        if connection:
            connection.rollback()
        print 'Error:{0}'.format(e)
    finally:
        if connection:
            connection.close()

if __name__ == '__main__':
    main()
This code runs, of course, but not in the way I want. It updates the entire 'Price' column, which is good, but every row ends up with only the last value in 'data' (75000):
(1, 'Airbus', 75000, 'Public')
(2, 'Helicopter', 75000, 'Private')
(3, 'Falcon', 75000, 'Military')
My desired output would look like:
(1, 'Airbus', 160000, 'Public')
(2, 'Helicopter', 40000, 'Private')
(3, 'Falcon', 75000, 'Military')
Now, how can I fix it?
Without setting up your database on my machine to debug, I can't be sure, but it appears that the query is the issue. When you execute
UPDATE Planes SET Price=%s
it updates the entire column with each value iterated from your data tuple, so only the last one sticks. Instead, you might need to change the tuple to a sequence of dictionaries
({'name': 'Airbus', 'price': 160000}, {'name': 'Helicopter', 'price': 40000}, ...)
and change the query to
"""UPDATE Planes SET Price=%(price)s WHERE Name=%(name)s"""
See the very bottom of this article for a similar formulation. To check that this is indeed the issue, you could execute the query just once with a single parameter (cursor.execute(query, data[0])), and I bet you will get the full Price column filled with the first value in your data tuple.
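The fix can be sketched end to end. This is a minimal, self-contained demo using sqlite3 from the standard library, so it runs without a Postgres server; note that sqlite3 uses :name placeholders where psycopg2 uses %(name)s:

```python
import sqlite3

# In-memory stand-in for the Planes table from the question.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE Planes (Id INTEGER, Name TEXT, Price INTEGER, Type TEXT)")
cur.executemany("INSERT INTO Planes VALUES (?, ?, ?, ?)", [
    (1, "Airbus", 0, "Public"),
    (2, "Helicopter", 0, "Private"),
    (3, "Falcon", 0, "Military"),
])

# One dict per row, so each UPDATE targets a specific plane by name.
data = (
    {"name": "Airbus", "price": 160000},
    {"name": "Helicopter", "price": 40000},
    {"name": "Falcon", "price": 75000},
)
cur.executemany("UPDATE Planes SET Price = :price WHERE Name = :name", data)
conn.commit()

rows = cur.execute("SELECT Id, Name, Price, Type FROM Planes ORDER BY Id").fetchall()
print(rows)
```

With the WHERE clause in place, each plane keeps its own price instead of every row ending up with the last value in data.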
A very similar post was made about this issue here. In cloudant, I have a document structure storing when users access an application, that looks like the following:
{"username":"one","timestamp":"2015-10-07T15:04:46Z"}---| same day
{"username":"one","timestamp":"2015-10-07T19:22:00Z"}---^
{"username":"one","timestamp":"2015-10-25T04:22:00Z"}
{"username":"two","timestamp":"2015-10-07T19:22:00Z"}
What I want is to count the number of unique users for a given time period. For example:
2015-10-07 = {"count": 2} two different users accessed on 2015-10-07
2015-10-25 = {"count": 1} one different user accessed on 2015-10-25
2015 = {"count": 2} two different users accessed in 2015
This all becomes tricky because, for example, on 2015-10-07 username "one" has two access records, but they should add only 1 to the total of unique users.
I've tried:
function(doc) {
var time = new Date(Date.parse(doc['timestamp']));
emit([time.getUTCFullYear(),time.getUTCMonth(),time.getUTCDay(),doc.username], 1);
}
This suffers from several issues, which are highlighted by Jesus Alva who commented in the post I linked to above.
Thanks!
There's probably a better way of doing this, but off the top of my head ...
You could try emitting an index for each level of granularity:
function(doc) {
var time = new Date(Date.parse(doc['timestamp']));
var year = time.getUTCFullYear();
var month = time.getUTCMonth()+1;
var day = time.getUTCDate();
// day granularity
emit([year,month,day,doc.username], null);
// year granularity
emit([year,doc.username], null);
}
// reduce function - `_count`
Day query (2015-10-07):
inclusive_end=true&
start_key=[2015, 10, 7, "\u0000"]&
end_key=[2015, 10, 7, "\uefff"]&
reduce=true&
group=true
Day query result - your application code would count the number of rows:
{"rows":[
{"key":[2015,10,7,"one"],"value":2},
{"key":[2015,10,7,"two"],"value":1}
]}
Year query:
inclusive_end=true&
start_key=[2015, "\u0000"]&
end_key=[2015, "\uefff"]&
reduce=true&
group=true
Query result - your application code would count the number of rows:
{"rows":[
{"key":[2015,"one"],"value":3},
{"key":[2015,"two"],"value":1}
]}
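As noted above, the application code just counts the rows of the grouped result. A minimal sketch in Python, using the year-query response shown above as a hypothetical payload:

```python
# Hypothetical grouped year-query response (shape taken from the example above).
year_result = {"rows": [
    {"key": [2015, "one"], "value": 3},
    {"key": [2015, "two"], "value": 1},
]}

# Each row is one distinct (year, username) group, so the number of rows
# is the number of unique users for that year, regardless of how many
# times each user accessed the application.
unique_users = len(year_result["rows"])
print(unique_users)
```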
I'm trying to filter a dataframe for a certain date in a column.
The column entries are timestamps, and I'm trying to construct a boolean vector from them, checking for a certain date.
I tried:
filterfr = df[(df.expiration.month == 6) & (df.expiration.day == 22) & (df.expiration.year == 2002)]
It doesn't work, because 'Series' object has no attribute 'month'.
How can this be done?
When you do df.expiration, you get back a Series where the items are the expiration datetimes.
Try comparing to an actual datetime.datetime object:
filterfr = df[df['expiration'] == datetime.datetime(2002, 6, 22)]
You may want to look into using a DatetimeIndex, depending on your dataset. This lets you use the convenient syntax
df['2002-06-22']
To access the DatetimeIndex methods, you currently* have to wrap the column in a DatetimeIndex.
The fastest way is to access the day, month and year attributes (just like you attempted):
expir = pd.DatetimeIndex(df['expiration'])
(expir.day == 22) & (expir.month == 6) & (expir.year == 2002)
Alternative, but slower ways are to use the normalize method (to bring it to the start of the day), or to use the date attribute:
pd.DatetimeIndex(df['expiration']).normalize() == datetime.datetime(2002, 6, 22)
pd.DatetimeIndex(df['expiration']).date == datetime.date(2002, 6, 22)
*In 0.15 there will be a dt attribute so that you can access these as:
expir = df['expiration']
expir.dt.day ...
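A minimal, self-contained sketch of the DatetimeIndex approach above (the frame and its values are made up for illustration):

```python
import pandas as pd

# Toy frame with a timestamp column, as in the question.
df = pd.DataFrame({
    "expiration": pd.to_datetime([
        "2002-06-22", "2002-06-23", "2002-06-22", "2003-01-01",
    ]),
    "price": [1.0, 2.0, 3.0, 4.0],
})

# Wrap the column to get the day/month/year attributes, then combine masks.
expir = pd.DatetimeIndex(df["expiration"])
mask = (expir.day == 22) & (expir.month == 6) & (expir.year == 2002)
filterfr = df[mask]
print(len(filterfr))  # two rows fall on 2002-06-22
```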
This
filterfr = df[df['expiration'] == datetime.datetime(2002, 6, 22)]
worked fine.
However, after doing some filtering, I got an error when trying to do filterfr.expiration[0] or filterfr['expiration'][0] to get the first element in the series.
KeyError: 0L is raised, although there are elements in the series.
The series looks like this:
Name: expiration, Length: 534668, dtype: datetime64[ns]
Shouldn't this actually always work?
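Not necessarily. One likely explanation (a sketch, not verified against your data): boolean filtering keeps the original index labels, so the label 0 may have been filtered out, and filterfr['expiration'][0] is a label lookup. Positional access goes through .iloc:

```python
import pandas as pd

# Stand-in for a filtered column: the surviving rows kept labels 5, 6, 7,
# so there is no label 0 anymore.
s = pd.Series([10, 20, 30], index=[5, 6, 7])

first = s.iloc[0]   # positional lookup: the first element regardless of labels
print(first)

try:
    s[0]            # label lookup: no label 0, so this raises KeyError
except KeyError:
    label_missing = True
```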