PowerBI live dashboard not updating in real time from REST API

I've got a small, simple console app pushing data into a PowerBI dataset. The data is going in, but the dashboard does not appear to update in real time.
If I manually refresh the dashboard I can see the latest data, but it does not update automatically when I add rows to the table.
I've captured the request/response in Fiddler, so I can see the data is going across.
POST https://api.powerbi.com/v1.0/myorg/datasets/e6373821-c2ed-438a-967a-febe163dca75/tables/LiveCpu/rows HTTP/1.1
Connection: Keep-Alive
Authorization: Bearer xxxx
Content-Type: application/json; charset=utf-8
Host: api.powerbi.com
Content-Length: 65
Expect: 100-continue
{"rows":[{"Timestamp":"2016-04-29T11:49:01","Value":31.8878784}]}
The response back is
HTTP/1.1 200 OK
Cache-Control: no-store, must-revalidate, no-cache
Transfer-Encoding: chunked
Content-Type: application/octet-stream
Server: Microsoft-HTTPAPI/2.0,Microsoft-HTTPAPI/2.0 Microsoft-HTTPAPI/2.0
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
RequestId: 9daaabb9-e76d-4684-8ed3-1f6dc37889ab
Date: Fri, 29 Apr 2016 10:48:59 GMT
0
So everything looks OK, but the live dashboard is not updating. I can even see messages in the browser developer tools showing the request ID has gone through, but no live updates appear.
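For completeness, here is the push call reduced to a minimal Python sketch using the requests library (the dataset ID, table name, and bearer token are the placeholders from the trace above):
import requests

DATASET_ID = "e6373821-c2ed-438a-967a-febe163dca75"
TABLE = "LiveCpu"
TOKEN = "xxxx"  # Azure AD bearer token (placeholder)

url = ("https://api.powerbi.com/v1.0/myorg/datasets/%s/tables/%s/rows"
       % (DATASET_ID, TABLE))
payload = {"rows": [{"Timestamp": "2016-04-29T11:49:01", "Value": 31.8878784}]}

resp = requests.post(url, json=payload,
                     headers={"Authorization": "Bearer " + TOKEN})
resp.raise_for_status()  # 200 OK means the rows were accepted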

It appears the problem was that I had pinned an entire report to the dashboard rather than an individual report tile; whole-report tiles do not appear to support automatic refresh.

Related

Instagram Graph API Unknown error fetching conversations

I followed all of the steps on the getting-started page of the Instagram Messaging docs, found here (https://developers.facebook.com/docs/messenger-platform/instagram/get-started). I even enabled message control tools and was able to successfully perform GET requests for every step mentioned in the docs, except for GETting the conversations from the Graph API.
My request was
curl -i -X GET \
  "https://graph.facebook.com/v9.0/xxxxx/conversations?platform=instagram&access_token=EAA..."
And my response was
HTTP/2 500
content-type: application/json; charset=UTF-8
access-control-allow-origin: *
facebook-api-version: v13.0
strict-transport-security: max-age=15552000; preload
pragma: no-cache
cache-control: private, no-cache, no-store, must-revalidate
expires: Sat, 01 Jan 2000 00:00:00 GMT
x-fb-request-id: AWxxxx
x-fb-trace-id: Gxxxxx
x-fb-rev: 1xxxxx
x-fb-debug: Icxxxxx
content-length: 77
date: Tue, 21 Jun 2022 04:11:42 GMT
alt-svc: h3=":443"; ma=86400, h3-29=":443"; ma=86400
{"error":{"code":1,"message":"An unknown error occurred","error_subcode":99}}
I'm wondering why, since I followed everything up to here and it was working. Any suggestions on what I could have missed or done wrong? Thanks.
Missing permissions can cause this issue; it is listed as the first item under the error codes section of those docs. Make sure you grant the user access token the appropriate permissions when you create it. The docs only mention "Moderate" task access, but the permissions required are:
From Facebook Login:
instagram_basic
instagram_manage_messages
pages_manage_metadata
Remember, your Facebook Developer account must be able to perform tasks with at least "Moderate" level access on the Facebook Page connected to the Instagram account you want to query.
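With a token carrying those scopes, the call from the question should go through. A minimal Python sketch of the same request (the page ID and token are the placeholders from the question):
import requests

PAGE_ID = "xxxxx"        # placeholder page ID, as in the question
ACCESS_TOKEN = "EAA..."  # token granted the permissions listed above

resp = requests.get(
    "https://graph.facebook.com/v9.0/%s/conversations" % PAGE_ID,
    params={"platform": "instagram", "access_token": ACCESS_TOKEN},
)
print(resp.status_code, resp.json())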

PowerBI publish to Service 500 error "We don't support the option 'HierarchicalNavigation'" (any more?)

I have a report which was working fine in PowerBI Service.
As of this morning it is failing. When I try to republish I get a 500 error; looking at the error in Telerik Fiddler, it says "We don't support the option 'HierarchicalNavigation'". I have no preview features turned on, and the refresh/dataset works in Desktop. I have posted on the PowerBI forum, but I'm adding it here to see if anyone has run into it before.
Screenshots and an HTTP trace of the publish are below.
HTTP/1.1 500 Internal Server Error
Content-Length: 443
Content-Type: application/json; charset=utf-8
X-PowerBI-Error-Info: ModelRefresh_ShortMessage_ProcessingError
X-PowerBI-Error-Details: {"error":{"code":"ModelRefresh_ShortMessage_ProcessingError","pbi.error":{"code":"ModelRefresh_ShortMessage_ProcessingError","parameters":[],"details":[{"code":"ModelRefresh_ProcessingErrorLabel","detail":{"type":1,"value":"COM error: Microsoft.Data.Mashup, We cannot generate a query for the given data source reference. Failure reason: We don't support the option 'HierarchicalNavigation'.\u000d\u000aParameter name: HierarchicalNavigation.. "}}]}}}
Strict-Transport-Security: max-age=31536000; includeSubDomains
X-Frame-Options: deny
X-Content-Type-Options: nosniff
Access-Control-Expose-Headers: RequestId,X-PowerBI-Error-Info,X-PowerBI-Error-Details
RequestId: 825a548f-1b27-492c-a5f7-accd650fe41f
Date: Wed, 24 Jun 2020 09:07:57 GMT
{"error":{"code":"ModelRefresh_ShortMessage_ProcessingError","pbi.error":{"code":"ModelRefresh_ShortMessage_ProcessingError","parameters":{},"details":[{"code":"ModelRefresh_ProcessingErrorLabel","detail":{"type":1,"value":"COM error: Microsoft.Data.Mashup, We cannot generate a query for the given data source reference. Failure reason: We don't support the option 'HierarchicalNavigation'.\r\nParameter name: HierarchicalNavigation.. "}}]}}}

BigQuery upload job returning errors - payload parts count wrong?

We are experiencing upload errors to BigQuery / cloud storage:
REQUEST
POST https://www.googleapis.com/upload/bigquery/v2/projects/XXX HTTP/1.1
Content-Type: multipart/related; boundary="PART_TAG_DATA_IMPORTER"
Host: www.googleapis.com
Content-Length: 652
--PART_TAG_DATA_IMPORTER
Content-Type: application/json; charset=UTF-8
{"configuration":{"load":{"createDisposition":"CREATE_IF_NEEDED","destinationTable":{"datasetId":"XX","projectId":"XX","tableId":"XX"},"schema":{"fields":[{"mode":"required","name":"xx1","type":"INTEGER"},{"mode":"required","name":"xx2","type":"STRING"},{"mode":"required","name":"xx3","type":"INTEGER"}]},"skipLeadingRows":1,"sourceFormat":"CSV","sourceUris":["gs://XXX/9f41d369-b63e-4858-9108-7d1243175955.csv"],"writeDisposition":"WRITE_TRUNCATE"}}}
--PART_TAG_DATA_IMPORTER--
RESPONSE:
HTTP/1.1 400 Bad Request
X-GUploader-UploadID: XXX
Content-Length: 77
Date: Fri, 15 Nov 2019 10:23:33 GMT
Server: UploadServer
Content-Type: text/html; charset=UTF-8
Alt-Svc: quic=":443"; ma=2592000; v="46,43",h3-Q050=":443"; ma=2592000,h3-Q049=":443"; ma=2592000,h3-Q048=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000
Payload parts count different from expected 2. Request payload parts count: 1
Is anyone else receiving this? Everything had been working fine until last night; there were no changes in our codebase, yet the error now occurs in about 80% of cases, and after 5-6 attempts the request (sometimes) goes through.
We are using .NET with the latest Google.Apis libraries, but the problem is reproducible with a simple request to the server. It also sometimes goes through normally.
Google has added a check to the /upload/bigquery/v2/projects/{projectId}/jobs endpoint: it can no longer receive a single-part message.
The plain /bigquery/v2/projects/{projectId}/jobs endpoint needs to be used when loading from GCS, as per the documentation here (which does not say this explicitly):
https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/insert
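In other words, when the source data is already in GCS, POST the job configuration as plain JSON to the non-upload endpoint instead of the multipart media endpoint. A minimal Python sketch of that call (the project, dataset, table IDs and the token are the placeholders from the trace above):
import json
import requests

PROJECT_ID = "XXX"  # placeholder, as in the trace above

job = {
    "configuration": {
        "load": {
            "createDisposition": "CREATE_IF_NEEDED",
            "writeDisposition": "WRITE_TRUNCATE",
            "sourceFormat": "CSV",
            "skipLeadingRows": 1,
            "sourceUris": ["gs://XXX/9f41d369-b63e-4858-9108-7d1243175955.csv"],
            "destinationTable": {"projectId": "XX",
                                 "datasetId": "XX",
                                 "tableId": "XX"},
            # include the "schema" block from the multipart request above
            # if the destination table may need to be created
        }
    }
}

resp = requests.post(
    "https://www.googleapis.com/bigquery/v2/projects/%s/jobs" % PROJECT_ID,
    headers={"Authorization": "Bearer <oauth-token>",  # placeholder
             "Content-Type": "application/json"},
    data=json.dumps(job),
)
print(resp.status_code, resp.text)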
This looks quite odd. It appears you're using the inline upload endpoint but you're passing a reference to a GCS object in the load config, and not sending an inline upload.
Could you share a snippet of how you're constructing this from the .NET code?

Python urllib2 - Freezes when connection temporarily dies

So, I'm working with urllib2, and it keeps freezing on a specific page. Not even Ctrl-C will cancel the operation. It's throwing no errors (I'm catching everything), and I can't figure out how to break it. Is there a timeout option for urllib2 that defaults to never?
Here's the procedure:
req = urllib2.Request(url, headers={'User-Agent': "...<Chrome's user agent string>..."})
page = urllib2.urlopen(req)
# p.s. I'm not installing any openers
Then, if the internet connection drops partway through the second line (which downloads the page), the program freezes completely, even once the connection is restored.
Here's the response header I get in my browser (Chrome) from the same page:
HTTP/1.1 200 OK
Date: Wed, 15 Feb 2017 18:12:12 GMT
Content-Type: application/rss+xml; charset=UTF-8
Content-Length: 247377
Connection: keep-alive
ETag: "00e0dd2d7cab7cffeca0b46775e1be7e"
X-Robots-Tag: noindex, follow
Link: ; rel="https://api.w.org/"
Content-Encoding: gzip
Vary: Accept-Encoding
Cache-Control: max-age=600, private, must-revalidate
Expires: Wed, 15 Feb 2017 18:12:07 GMT
X-Cacheable: NO:Not Cacheable
Accept-Ranges: bytes
X-Served-From-Cache: Yes
Server: cloudflare-nginx
CF-RAY: 331ab9e1443656d5-IAD
P.S. The URL points to a large WordPress feed which, according to the response headers, is served gzip-compressed.
According to the docs, the default timeout is, indeed, no timeout. You can specify a timeout when calling urlopen though. :)
page = urllib2.urlopen(req, timeout=30)
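Note that a timeout surfaces as an exception rather than a return value, so you will want to catch it; a minimal sketch (the URL is a placeholder):
import socket
import urllib2

url = "http://example.com/feed"  # placeholder
req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
try:
    page = urllib2.urlopen(req, timeout=30)  # seconds
    body = page.read()
except (urllib2.URLError, socket.timeout) as e:
    # urlopen usually wraps timeouts in URLError, but socket.timeout
    # can also escape from read(), so catch both.
    print("Request failed or timed out: %s" % e)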

How to find out where a cookie is set?

I am trying to find out where a cookie is being set.
I am running Varnish cache and want to know where the cookie is being set so I know if I can safely remove it for caching purposes.
The response headers look like this:
HTTP/1.1 200 OK
Server: Apache/2.2.17 (Ubuntu)
Expires: Mon, 05 Dec 2011 15:11:39 GMT
Cache-Control: no-store, max-age=7200
Vary: Accept-Encoding
Content-Type: text/html; charset=UTF-8
X-Session: NO
X-Cacheable: YES
Date: Tue, 04 Dec 2012 15:29:40 GMT
X-Varnish: 1233768756 1233766580
Age: 1081
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
There is no cookie present. When loading the same page in a browser the headers are the same: I get a cache hit and no cookie in the response headers.
But then the cookie is there all of a sudden, so it must be being set somewhere. Even if I remove it, it reappears, and it even appears in Incognito mode in Chrome, yet it is never in the response headers.
I have been through all the JavaScript on the site and cannot find anything. Is there any other way of setting a cookie?
Thanks.
If the Set-Cookie header goes through Varnish at some point, you can use varnishlog to find the request URL:
$ varnishlog -b -m 'RxHeader:Set-Cookie.*COOKIENAME'
This will give you a full varnishlog listing for the backend requests, including the TxURL to the backend, which tells you what the client asked for when it got the Set-Cookie back.
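If the cookie never shows up in varnishlog either, it is most likely being set client-side. One quick check is to fetch the page outside a browser, where no JavaScript runs, and inspect the raw headers; a minimal Python sketch (the URL is a placeholder):
import requests

# Placeholder URL; substitute the page you are debugging.
resp = requests.get("http://example.com/")
# Any cookie that shows up here arrived via an HTTP Set-Cookie header;
# a cookie that only ever appears in the browser is being set by JavaScript.
print(resp.headers.get("Set-Cookie"))
print(resp.cookies.get_dict())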