Power BI Embedded not working because of X-Frame-Options

I'm using powerbi-service-js to embed reports in my Angular 8 application. Until October 3rd, everything worked fine: I would authenticate against the URL (https://login.microsoftonline.com/common/oauth2/token) and then make a request to the Power BI API to get the report token. But now, when using pbiService's embed function, I get the error below. I'm using DirectQuery to build the report and deploying the application behind Nginx.
This is the error in the Chrome console:
Refused to display 'https://app.powerbi.com/tokenRefresh?ver=1570487269987' in a frame because it set 'X-Frame-Options' to 'sameorigin'.
ERROR DOMException: Blocked a frame with origin "https://app.powerbi.com" from accessing a cross-origin frame.
at e.retryTokenRefresh (https://app.powerbi.com/13.0.10956.175/scripts/reportEmbed.min.js:1:2245948)
at e.onTokenRefreshLoad (https://app.powerbi.com/13.0.10956.175/scripts/reportEmbed.min.js:1:2245770)
at HTMLIFrameElement.document.getElementById.onload [as __zone_symbol__ON_PROPERTYload] (https://app.powerbi.com/13.0.10956.175/scripts/reportEmbed.min.js:1:2245299)

Are you perhaps viewing this in Google Chrome? Since around the date you mentioned, Chrome blocks mixed content, so check your URLs and make sure none of them use plain HTTP; all of them must be HTTPS. You can confirm this theory by viewing your app in another browser.
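To rule out mixed content, check every URL involved in the embed, in particular the embedUrl and access token you pass to the embed call. Below is a minimal sketch of what that looks like with the underlying powerbi-client package; the report id, token, and container element id are placeholders, and your wrapper service may expose this slightly differently.

import * as pbi from 'powerbi-client';

// powerbi-client's embed service; wrappers such as powerbi-service-js build on the same calls
const powerbi = new pbi.service.Service(
  pbi.factories.hpmFactory, pbi.factories.wpmpFactory, pbi.factories.routerFactory);

const container = document.getElementById('reportContainer') as HTMLElement;  // placeholder element id
const embedConfig = {
  type: 'report',
  id: '<report-id>',                                 // placeholder
  embedUrl: 'https://app.powerbi.com/reportEmbed',   // must be HTTPS, exactly as returned by the Power BI REST API
  accessToken: '<token-from-the-powerbi-api>',       // placeholder
  tokenType: pbi.models.TokenType.Embed,             // or TokenType.Aad if you embed with the AAD token
};
powerbi.embed(container, embedConfig);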

Related

AWS Glue Notebook: Failed to authenticate user due to missing information in request

I'm getting the following error on the Ohio server:
Failed to authenticate user due to missing information in request.
This suddenly started happening after using Glue for a few months.
I've tried adding AWS to my cookie allow list and it still doesn't let me in. I've also tried restarting my machine and using two browsers other than Chrome, yet it still doesn't work. I'm the only one affected on the team, and it only happens on the Ohio server, while NV etc. work perfectly.
I had the exact same issue. Check that your browser does not block third-party cookies. Any browser that blocks third-party cookies, either by default or as a user-enabled setting, will prevent notebooks from launching (the typical error message is "Failed to authenticate user due to missing information in the request").
Chrome: Turn Off "Block Third-Party Cookies" in Chrome for Windows
Firefox: Third-party cookies and Firefox tracking protection
Safari: Clear cookies in Safari on Mac

Problem handling cookies for Blazor Server using OpenID server (Keycloak)

I have a baffling issue with cookie handling in a Blazor Server app (.NET Core 6) using OpenID (Keycloak). Actually, more than one issue, and they may or may not be linked. It’s a typical (?) reverse proxy architecture:
A central nginx receives queries for services like Jenkins, JupyterHub, SonarQube, Discourse, etc. These are mapped through aliases to internal IPs where this nginx can reach them. This nginx intercepts URLs like https://hub.domain.eu.
A reverse proxy which resolves to https://dsc.domain.eu and forwards requests to a Blazor app running in Kestrel on port 5001. Both Kestrel and nginx run under SSL – required to get the WebSockets working.
Some required background: the Blazor app is essentially a ‘hub’ whose various Razor pages ‘host’ the above-mentioned services in iframe-like fashion. How it works: when the user asks for the root path (https://hub.domain.eu), it opens the root page of the Blazor app (/).
The nav menu contains the links to the Razor pages which contain the iframes for the above-mentioned services. For example:
The relative path is intercepted by the ‘central’ nginx, which loads Jenkins. Everything is under the same Keycloak OpenID server. Note that everything works fine without the Blazor app.
Scenarios that cause the same problem
Assume the user logs in to my app using the login page of Keycloak (NOT the REST API) through redirection, then proceeds to a service link and is indeed logged in there as well. The controls in the app change accordingly to indicate that the user is authenticated. If you close the tab and open a new one, the Blazor app acts as if it’s not logged in, while the other services (e.g. Jenkins) still show the logged-in user from before. When you press the Login link, you’re greeted with a 502 nginx error. If you clear the cookies from the browser (or use private / stealth mode), everything works again. Or if you just log off, e.g. from Jenkins.
Assume that the user is now in a service such as Jenkins, SonarQube, etc. If you press F5, you now have two problems: you get a 404 error, but only on SOME services such as SonarQube and not on others (this is a side problem for another post). The main thing is that the Blazor app appears not logged in again after pressing Back / Refresh.
The critical part of Program.cs looks like the following:
This class handles the login / logoff:
Side notes:
SaveTokens = false still causes large-header errors and results in an empty token (shown in the above code with the warning: Token received was null). I’m still able to obtain user details from the HttpContext, though.
No errors show up in the reverse proxy's error.log or in Kestrel (all deployed on Linux).
MOST important: if I copy-paste the failed login link (the one that produced the 502 error) to a "clean" browser, it works fine.
There are lots of properties affecting OpenID Connect, and it could also be an nginx issue, but I’ve run out of ideas over the last five days. The nginx config has already been adjusted for large headers and WebSockets.
Any clues as to where I should at least focus my research to track down the error?
The 502 error points to a problem on NGINX's side. The reverse proxy had the proper configuration, but as it turned out, the front nginx did not. Once we raised its header size to the suggested value, everything worked.

Do CORS restrictions apply to browser windows as well? HTML editor: 127.0.0.1:5000, image editor: 127.0.0.1:8000. Sending image results back causes a CORS error

I have an app on 127.0.0.1:5000 that edits a page (HTML code).
If I need to edit a picture on that page using a specialized editor, I select the picture and then fire up a call to 127.0.0.1:8000/picture_editor?picture_url="127.0.0.1:5000/static/uploads/picture.jpg
All good so far: I am able to edit the picture, and I have code that should send the results back to the parent window and integrate the changes in the editor.
The problem is that this triggers a CORS (cross-origin resource sharing) security exception and the call does not complete. Here is the error:
svg-editor.html?picture_url=http://127.0.0.1:5000/static/uploads/picture.jpg&width=225&height=276:64 Uncaught DOMException: Blocked a frame with origin "http://localhost:8000" from accessing a cross-origin frame.
What are my options to deal with this? Is there any way to deal with this? This is not really site-to-site CORS, but rather the browser not allowing communication between two windows that belong to different sites (although only the port differs).
My app is a Flask application and I have already enabled CORS there:
from flask import Flask
from flask_cors import CORS
app = Flask(__name__)
cors = CORS(app, resources={r"*": {"origins": "*"}})
But the browser is still reporting the above error.
Yes, CORS (or, more precisely, the browser's same-origin policy) is specifically about this: it does not allow code in a browser window loaded from one site to interact with code in another window that was loaded from a different site.
As far as my problem goes, I found that the editor has an ES6 version that can be loaded without running the Node server (in my case, the server running on port 8000).
Toying with the CORS settings for Flask and Node.js (I have no clue how to do the latter) proved insufficient for Flask (the code above did not solve my problem) and too difficult for me on Node.js, which I know nothing about.
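For completeness: when two windows from different origins do need to talk to each other, the usual mechanism is window.postMessage, which is allowed across origins as long as each side checks the sender's origin. A minimal sketch under the setup described in the question (the payload shape is hypothetical, and it assumes the image editor was opened with window.open from the HTML editor):

// In the image editor window (127.0.0.1:8000):
const htmlEditorOrigin = 'http://127.0.0.1:5000';
const result = { pictureUrl: '/static/uploads/picture.jpg', svg: '<svg>...</svg>' };  // hypothetical payload
window.opener?.postMessage(result, htmlEditorOrigin);  // only the HTML editor origin may receive it

// In the HTML editor window (127.0.0.1:5000):
window.addEventListener('message', (event: MessageEvent) => {
  if (event.origin !== 'http://127.0.0.1:8000') return;  // ignore messages from unexpected origins
  console.log('Edited picture received:', event.data);   // integrate the changes into the page here
});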

nginx API cross-origin calls not working, but only from some browsers

TL;DR: The React app's API calls return with status code 200 but without a body in the response; this happens only when accessing the web app from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx on a subdomain (i.e. backend.mydomain.com), for things like validating the token and fetching data.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users have complained that they could not log in from their work network. They can visit the site, so obviously the JavaScript application is served to them, but when they log in, all of the requests come back with status 200, except the response has an empty body (and the login requires a few pieces of information to be sent back with the login response to work).
For example, when I log in from where I am, I get a response with status=200 and a JSON object with a few parameters in the body of the response.
But when one of the users showed me the same from their browser, they get status=200 back, but the response is empty. They are using the same browser versions as I have. They tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users and having them send me some screenshots, I found the problem. In my browser, which works with the site, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. However, in their browser the same calls showed no-referrer-when-downgrade.
I had not explicitly set the referrer policy, so each browser was using its own default value, and that default differs between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
The users who had trouble before can now access the site's API resources after clearing the cache in their browsers.
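If changing the nginx configuration is not an option, the same policy can also be pinned per request from the client side: the Fetch API accepts a referrerPolicy option. A hedged sketch, where backend.mydomain.com is the subdomain from the question and /api/login/ is a made-up endpoint:

async function login(username: string, password: string): Promise<Response> {
  return fetch('https://backend.mydomain.com/api/login/', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    referrerPolicy: 'strict-origin-when-cross-origin',  // don't rely on the browser's default policy
    body: JSON.stringify({ username, password }),
  });
}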

GET request 200 OK but 'failed to load response data' for links

I made a personal website (http://www.soyoungpark.online) using a domain bought from GoDaddy and hosted on AWS S3. I set everything up and thought things were working, until I added a simple link to my LinkedIn profile. When I check the network panel, I see that the status code is 200 OK, but the response contains nothing. The code itself doesn't seem to be problematic; it is simply an <a> with the href of the desired link. So I am guessing something could be wrong with my AWS S3 settings? Anyone with a similar experience?
It's likely that these services include a response header called X-Frame-Options that, for security, prevents them from being loaded within another site:
The X-Frame-Options HTTP response header can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites. Source: X-Frame-Options
This does look to be the case when attempting to view LinkedIn, per your example:
Refused to display 'https://www.linkedin.com/in/exampleuser' in a frame because it set 'X-Frame-Options' to 'sameorigin'.
That said, applying a target attribute to each link so that it opens in a new tab or window should allow these outside services to be reached.
e.g.:
<a href="https://www.linkedin.com/in/exampleuser" target="_blank" rel="noopener">My LinkedIn profile</a>