Ember app: looking for a workaround for a Safari bug that ignores the '#' in my URL on HTTPS redirects by CloudFront

UPDATE 2:
I found the reason, but not a solution yet. This has to do with how Safari deals with the '#' when CloudFront redirects HTTP to HTTPS: Safari drops the fragment, and apparently this is a bug that has existed in Safari for years. I'm not 100% sure this is my issue, but it seems to be. Still looking for a solution.
END UPDATE 2
For some reason I can't figure out, Safari (mobile and desktop) behaves differently from Chrome and Firefox when I refresh a page or try to access a route directly in my app.
I have a playlists route:
Router.map(function () {
  // ...
  this.resource("playlists", function () {});
  // ...
});
I can hit the playlists route directly at rooturl.com/playlists in Chrome and Firefox, and in the console logs I see this:
Attempting URL transition to /playlists
When I try to hit the playlists route directly in Safari, I see this:
Attempting URL transition to /
Another strange thing is that when I use my localhost, the transition is correct in all browsers, including Safari (mobile and desktop). This makes me think it has something to do with the production environment. I'm using AWS S3 and CloudFront, but I'm not fully sure that has anything to do with this.
I can provide more information here if asked.
UPDATE:
When the URL contains the '#' fragment marker, Safari redirects correctly. So this redirects correctly:
example.com/#/playlists
But this does not:
example.com/playlists
Again, this problem only occurs in production, on AWS S3/Cloudfront. On localhost, Safari works as expected.
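Given that observation, one possible stopgap (not what was ultimately done here) is to make Ember generate hash-based URLs everywhere via the locationType setting, so every route lives behind the '#'. A minimal sketch, assuming a standard Ember CLI app:
// config/environment.js
module.exports = function (environment) {
  const ENV = {
    modulePrefix: 'my-app', // assumed app name
    environment,
    // Per the update above, URLs like example.com/#/playlists survive the
    // redirect in Safari, so put every route after the '#'.
    locationType: 'hash',
    // ...
  };
  return ENV;
};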

In my case, this was fixed by making sure the protocol is never switched from https to http at any point. Specifically, S3 was issuing a redirect over http while the original request was made over https, which leads to Safari stripping any (potentially) sensitive data, including the fragment, from the request.
We fixed it by forcing https in our S3 redirect setup.
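For reference, a minimal sketch of where that protocol setting lives, assuming the common S3 static-website setup that redirects unknown paths back into the app's hash route (the hostname and prefix are placeholders; the important line is "Protocol": "https"). It can be applied with aws s3api put-bucket-website --website-configuration file://website.json:
{
  "IndexDocument": { "Suffix": "index.html" },
  "RoutingRules": [
    {
      "Condition": { "HttpErrorCodeReturnedEquals": "404" },
      "Redirect": {
        "HostName": "example.com",
        "Protocol": "https",
        "ReplaceKeyPrefixWith": "#/"
      }
    }
  ]
}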


Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something goes wrong in the specific request that checks whether the user is logged in: it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be, for example, example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
import { statelessSessions } from '@keystone-6/core/session';

const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: '/',
};

const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite to next.config.js:
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
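For context, the relative URL ends up in the Apollo link, roughly like this (a sketch using apollo-upload-client; the credentials option is my addition, but something like it is needed for the session cookie to travel with each request):
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';

const client = new ApolloClient({
  cache: new InMemoryCache(),
  link: createUploadLink({
    uri: '/api/graphql', // relative path, rewritten by Next.js
    credentials: 'include', // send the session cookie along
  }),
});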
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know for sure what's happening without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed environment over plain HTTP, the cookie is never sent, creating the behaviour you're describing. Somewhat confusingly, the flag is ignored on localhost, which is why it works in development.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things (sketched after this list):
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which tells the backend which protocol the original request used
Tell Express to trust what the proxy is saying by configuring the trust proxy setting
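A minimal sketch of both halves, assuming nginx in front and Keystone 6's extendExpressApp hook (paths and names are illustrative):
// nginx side, in the relevant location block:
//   proxy_set_header X-Forwarded-Proto $scheme;

// Keystone side, in keystone.ts:
import { config } from '@keystone-6/core';

export default config({
  server: {
    extendExpressApp: (app) => {
      // Trust the first proxy hop so Express believes X-Forwarded-Proto
      // and treats the request as secure; 'secure' cookies then work.
      app.set('trust proxy', 1);
    },
  },
  // ...db, lists, session, etc. omitted
});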
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your frontend from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone passes through to the cors package.
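For example, something like this (a sketch; the origin is whatever domain serves your Next.js app, and credentials: true is needed for the browser to send the session cookie cross-origin):
import { config } from '@keystone-6/core';

export default config({
  server: {
    cors: {
      origin: ['https://www.example.com'], // the frontend's origin (assumed)
      credentials: true, // allow the session cookie on cross-origin requests
    },
  },
  // ...
});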
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

nginx API cross origin calls not working only from some browsers

TL;DR: my React app's API calls return status code 200 but with no body in the response; this happens only when the web app is accessed from some browsers.
I have a React + Django application deployed using nginx and uWSGI on a single CentOS 7 VM.
The React app is served by nginx on the main domain, and when users log in on the JavaScript app, REST API requests are made to the same nginx on a subdomain (e.g. backend.mydomain.com), for things like validating the token and fetching data.
This works on all recent versions of Firefox, Chrome, Safari, and Edge. However, some users have complained that they could not log in from their work network. They can visit the site, so obviously the JavaScript application is served to them, but when they log in, all of the requests come back with status 200, except the responses have empty bodies (and logging in requires a few pieces of information to be sent back with the login response to work).
For example, when I log in from where I am, I get a response with status 200 and a JSON object with a few parameters in the body.
But when one of the users showed me the same requests from their browser, they got status 200 back, but the responses were empty. They are using the same browser versions as I am, and they tried both Firefox and Chrome with the same behaviour.
After finally getting hold of one of the users and having them send me some screenshots, I found the problem. In my browser, where the site works, the API calls to the backend had the Referrer Policy set to strict-origin-when-cross-origin in the headers. On their browser, the same calls showed no-referrer-when-downgrade.
I had not explicitly set a referrer policy, so each browser fell back to its own default value, which differs between browser versions (https://developers.google.com/web/updates/2020/07/referrer-policy-new-chrome-default).
To fix this, I added add_header 'Referrer-Policy' 'strict-origin-when-cross-origin'; to the nginx.conf file and restarted the server. More details here: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Referrer-Policy
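In context, the directive sits in the server block that fronts the application (a sketch; the server name, socket path, and the always flag, which also applies the header to error responses, are my additions):
server {
    listen 443 ssl;
    server_name backend.mydomain.com;

    # Pin the policy so it no longer depends on each browser's default
    add_header Referrer-Policy "strict-origin-when-cross-origin" always;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/run/backend.sock;  # assumed uWSGI socket path
    }
}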
The users who had trouble before can now access the site API resources after clearing cache in their browsers.

Mobile Safari: iframe missing cookies

I have a web app located on domain A which contains an iframe on domain B. The request to the iframe src on domain B has some Set-Cookie headers. If I load this web app in Safari or Chrome, I can see the cookies set by the iframe request in developer tools. However, if I visit the same page in the iOS simulator (iOS 12), the cookies are not set and I get auth errors (due to the missing cookies). I haven't had any luck finding anything online about this behaviour, so I have no idea how to work around it. I feel like I must be missing something, because this seems like it would be a giant missing feature.
Unfortunately, I haven't had time to set up a simple reproduction for this issue.
Any kind of advice would help.
The issue is that iOS Safari does not allow setting cookies from domain B unless you "explicitly visit" B. The workaround is to visit the iframe's domain and set a blank cookie there, then bounce back to the original domain A. Afterwards, your Set-Cookie directives (or whatever you use to set cookies) will be allowed by iOS.
Check out this solution and discussion (also includes reproduction setup):
https://gist.github.com/iansltx/18caf551baaa60b79206
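The linked approach boils down to a tiny "bounce" endpoint on domain B, roughly like this Express sketch (the route, cookie name, and return URL are made up, and a real implementation should validate the return URL):
const express = require('express');
const app = express();

// Domain A sends the user here once, as a top-level navigation (not in an iframe).
app.get('/bounce', (req, res) => {
  // Setting any first-party cookie counts as "explicitly visiting" domain B...
  res.cookie('seen', '1', { secure: true });
  // ...after which Safari accepts cookies from B inside A's iframe.
  res.redirect('https://domain-a.example/');
});

app.listen(3001);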

Single Sign On in FF or Chrome creates 502 NGINX Error while IE works

I have a Django and nginx setup that integrates with single sign-on. Recently we had to change domain names, and we are using Akamai to spoof the new URL while the old domain still resolves to our load balancer.
SSO login attempts succeed in IE, but in Chrome or Firefox there is a 502 error instead.
When IE logs in, there is a POST from oktapreview.com that generates a 302.
With Firefox or Chrome, there are three consecutive POSTs from oktapreview.com, each of which generates a 502. The first two POSTs have identical timestamps and the third comes 3-4 seconds later. In both Firefox and Chrome, upon refreshing, the user finds they are actually logged in.
Any advice on what is causing this? Why are there three POSTs logged from the SSO server? And why would IE (not Edge, but IE) work while Chrome and FF fail?
For future seekers, here is the resolution:
SSO creates a huge URL string, which first hits the server's HTTP buffer. My nginx and uWSGI HTTP buffers were at their default sizes, around 4 KB each, but the Okta SSO was creating URLs that were something like 20 KB in their own right. I had to expand the HTTP buffers in both pieces of software to prevent that string from being chopped into bits. The error message was an unhelpful 502, but it was resolved with expanded buffers. In short, keep in mind that SSO adds a lot to an HTTP header.
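For anyone hitting the same wall, the relevant knobs look roughly like this (the sizes are illustrative, not the exact values used here):
# nginx.conf, http block: allow larger request lines and headers
large_client_header_buffers 4 32k;

# uwsgi.ini: uWSGI's buffer-size defaults to 4096 bytes
buffer-size = 32768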

YouTube videos not able to play on my django-heroku app, giving me a mixed content error message

I tried to view YouTube videos on my app and it didn't work. I checked the console and got this error message:
Mixed Content: The page at 'https://hispanicheights.herokuapp.com/blog/youtube-video/'
was loaded over HTTPS, but requested an insecure script
'http://content.jwplatform.com/libraries/WQWJdvRx.js'.
This request has been blocked; the content must be served over HTTPS.
Is there a way around this or is this just the situation until I get a paid account with a domain?
This has nothing to do with Heroku, paid plans or not. It is simply that you are linking to an http resource inside a page that is served over https; since that potentially sidesteps the man-in-the-middle protection that https gives you, modern browsers forbid it.
The solution is to serve all your dependent scripts over https as well.
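In this case, that means loading the same JW Player script over https, assuming the CDN serves it there too (most do):
<!-- Before: blocked as mixed content -->
<script src="http://content.jwplatform.com/libraries/WQWJdvRx.js"></script>

<!-- After: the same script over HTTPS -->
<script src="https://content.jwplatform.com/libraries/WQWJdvRx.js"></script>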