I'm making a web app in a RESTful way.
For my client side I'm using AngularJS, hosted at, let's say,
https://domain.com
My backend is built on Spring Boot and I call all the resources from a subdomain, let's say,
https://xyz.domain.com
Now when a user logs in, the backend sends an HTTP-only cookie to the client.
I can see the cookie in the response header, but it's not being set in the browser's cookies.
After a bit of research, I tried sending the cookie with domain=.domain.com,
but that didn't work either.
Is there a way I can set a cookie coming from xyz.domain.com for my client side at domain.com?
(Note: I'm not using www.domain.com.)
Any help or clue would be great.
Thank you for going through my question.
The problem you're describing is related to cross-domain cookie policies. I don't know your exact use case, but looking at CORS and P3P headers should give you a good start. As an option, you can try setting your cookie manually via JavaScript.
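If you go the manual route, here is a minimal sketch, assuming the backend also returns the token in the response body (JavaScript cannot read or write an HttpOnly cookie, so that flag has to be given up for this approach):
// token is whatever the login response returned in its body (hypothetical)
document.cookie = 'session=' + token + '; domain=.domain.com; path=/; secure';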
Getting CORS working isn't enough; you also need to enable withCredentials in Angular.
https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/withCredentials
Example:
angular.module('example', []).config(function ($httpProvider) {
  $httpProvider.defaults.withCredentials = true;
});
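Once that flag is on, a request from Angular might look like the minimal sketch below (the endpoint path is made up). Note that the cookie will only be stored if the backend also responds with Access-Control-Allow-Origin: https://domain.com (an explicit origin, not *), Access-Control-Allow-Credentials: true, and a Set-Cookie header whose Domain attribute is .domain.com:
// inside any controller or service that injects $http
$http.get('https://xyz.domain.com/api/user') // withCredentials is already on via the config above
  .then(function (response) {
    // if the CORS headers above are in place, the browser stores the HttpOnly
    // cookie and sends it automatically on later requests to xyz.domain.com
  });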
(diagram: infrastructure of the system)
Expected:
I want to block requests that are not from the FE server (domain.com).
For example, if a user makes a request from another app such as Postman, it should respond with 403, access denied.
I used ALB rules; it works, but users can cheat with Postman.
I also used AWS WAF to detect requests, but it doesn't work.
Is there any way to block requests from Postman or other apps?
We can generate a secret_key and check it between the FE server and the BE server, but users can see it in the headers, simulate them in Postman, and call the API successfully.
Current Solution:
I use an Application Load Balancer rule to check the Host and Origin headers, but users can add these headers in Postman and the request succeeds.
(screenshot: the ALB rule)
When I add an Origin header matching the value set on the ALB, the request succeeds:
(screenshot: Postman request succeeding)
(screenshot: Postman request denied)
So users can cheat and call the API successfully.
Thanks for reading. Please help me with any solution for this one. Thanks a lot.
No. HTTP servers have no way to know what client is being used to make any HTTP request. Any HTTP client (Browsers, PostMan, curl, whatever) is capable of making exactly the same requests as each other.
The user-agent header is a superficial way to do this, but it's easy enough for PostMan or any other HTTP client to spoof the user-agent header to one that makes the request look like it is coming from a web browser.
You can only make it more challenging to do so. Some examples of ways to thwart this behavior include using tools like Google's CAPTCHA or Cloudflare's browser integrity check, but they're not bulletproof and ultimately aren't 100% effective at stopping people from using tools/automation to access your site in unintended ways. At the end of the day, you're limited to what can be done with HTTP, and PostMan can do everything at the HTTP layer.
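To illustrate the point, here is a rough sketch in Node (the host and path are made up) showing that any HTTP client can present exactly the headers a browser would:
const https = require('https');

const req = https.request({
  host: 'api.example.com',            // hypothetical backend host
  path: '/v1/orders',                 // hypothetical endpoint
  headers: {
    'User-Agent': 'Mozilla/5.0 ...',  // spoofed browser user agent
    'Origin': 'https://domain.com',   // spoofed Origin header
    'Referer': 'https://domain.com/', // spoofed Referer header
  },
}, (res) => console.log(res.statusCode));
req.end();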
Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in: it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
import { statelessSessions } from '@keystone-6/core/session';

const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: '/',
};

const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
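A rough sketch of that, assuming apollo-upload-client (the credentials option is my assumption; adjust it to your setup):
import { createUploadLink } from 'apollo-upload-client';

const link = createUploadLink({
  uri: '/api/graphql',    // the rewritten relative path from next.config.js
  credentials: 'include', // send the session cookie with every request
});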
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which informs the backend which protocol was used for the original request
Tell Express to trust what the proxy is saying by configuring the trust proxy setting (see the sketch below)
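A sketch of that second step, assuming Keystone 6's extendExpressApp hook (everything outside the trust proxy line is just illustrative scaffolding):
import { config } from '@keystone-6/core';

export default config({
  server: {
    extendExpressApp: (app) => {
      // trust the X-Forwarded-* headers set by the reverse proxy
      app.set('trust proxy', true);
    },
  },
  // ...db, lists, session, etc.
});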
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😠 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
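For reference, a minimal sketch of that cors option (the origin value is a placeholder for your front-end URL; credentials: true is needed so the browser will include the session cookie on cross-origin requests):
import { config } from '@keystone-6/core';

export default config({
  server: {
    cors: {
      origin: ['https://example.com'], // the front-end origin(s)
      credentials: true,               // allow cookies on cross-origin requests
    },
  },
  // ...db, lists, session, etc.
});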
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.
If I set the API key restrictions to "None", the service works great. If I set the HTTP referrers to websites, it works as expected with certain websites. If I set the HTTP referrers to the URLs of web API servers, I get a "restricted" message. Does anyone know how to allow the URL of the web API server to make a successful call when restrictions are being used? I would think that api.somedomain.com would work.
Looks like it might not be possible. Wow, what a shame! Hopefully, there is an update or workaround for this.
How to set Google API key restriction - HTTP referrers
By the way, these example patterns from their documentation don't work either:
somedomain.com/*
*.somedomain.com/*
I have to write the full subdomain for all my website URLs.
Thanks in advance!
What I ended up doing is creating another API key for my web API server requests. Since this key isn't displayed in a website, I shouldn't have to lock it down.
I have mysite.com and mysite.nl.
I want to build single sign-on: someone signing in on the .com should also be signed in on the .nl.
I do this by putting an image (1 pixel transparent PNG image) on the .nl domain which sends back a cookie in the response.
In my Firefox dev tools, I can see the cookie in the response headers, and I have made sure its domain is set to mysite.nl.
But somehow, when I then navigate to mysite.nl, I don't see the cookie set. Am I missing something? I tried disabling tracker blocking, but to no avail.
Google does it this way as well, right? I.e., log in to Google and you're logged in to YouTube.
If the browser makes a request to xyz.mysite.com, it has to drop the domain cookie for mysite.nl. This is due to the browser security model. If you want to achieve single sign-on between xyz.mysite.com and xyz.mysite.nl, you need some technology to 'transfer' the session token between the two domains: either a standards-backed technology like SAML or OIDC, or a proprietary mechanism. If you carefully look at the HTTP response, you will see two Set-Cookie response headers, one with the domain property set to mysite.com and one with it set to mysite.nl.
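Purely to illustrate the 'transfer' idea with a proprietary mechanism (a sketch only; the helper functions are made up): mysite.com redirects the signed-in user to mysite.nl with a short-lived signed token, and mysite.nl verifies it and sets its own first-party cookie.
const express = require('express');
const app = express();

// running on mysite.nl: receives the redirect from mysite.com
app.get('/sso', (req, res) => {
  const user = verifySignedToken(req.query.token); // hypothetical: validate the short-lived token
  if (user) {
    // set a first-party cookie for mysite.nl itself
    res.cookie('session', createSessionFor(user), { // hypothetical session factory
      httpOnly: true,
      secure: true,
      sameSite: 'lax',
    });
  }
  res.redirect('/');
});

app.listen(3000);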
In jmeter.properties I set "CookieManager.check.cookies=false", but cross-domain cookies still aren't working.
For example, following this guide and using their demo site, setting a cookie with a domain of "blazedemo.com" works, but if I change the domain to anything else it fails.
JMeter only sends cookies that match the domain of the server in the request.
The property you've set affects the way JMeter reads cookies, not the way it writes them.
To check, send an HTTP request to a host for which you created the cookie; you'll see it works.