I'm trying to configure CubesViewer and try out the setup.
I've got the app installed and running, along with the Cubes Slicer app too.
However, when I visit the home page
http://127.0.0.1:8000/cubesviewer/
it fails, popping up the error "Error occurred while accessing the data server".
Debugging with the browser console shows an HTTP 403 error for the URL http://localhost:8000/cubesviewer/view/list/
After some googling and reading, I figured I'd need to add the REST framework auth settings (as mentioned here).
Now, after running migrate and runserver, I get a 401 error on that URL.
Clearly I'm missing something in settings.py. Can somebody help me out?
I'm using the CubesViewer tag v0.10 from the GitHub repo.
You can find my settings here: http://dpaste.com/2G5VB5K
P.S.: I've verified that Cubes Slicer works separately on its own.
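For reference, the auth-related settings I added are roughly of this shape (simplified; my full settings are in the paste above, and the exact classes may differ):
# settings.py (simplified excerpt; 'rest_framework' is also listed in INSTALLED_APPS)
REST_FRAMEWORK = {
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.SessionAuthentication',
        'rest_framework.authentication.BasicAuthentication',
    ),
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
}
(Presumably the IsAuthenticated permission is what turns the earlier 403 into a 401 for unauthenticated requests, but I don't know what the correct setup is.)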
I have reproduced this. This error may occur when you use different URLs to access a website and to access related resources. For security reasons, browsers only allow access to resources from exactly the same host as the page you are viewing.
It seems you are accessing the app via http://127.0.0.1:8000, but you have configured CubesViewer to tell clients to access the data backend via http://localhost:8000. While both resolve to the same IP address, they are different strings.
Try accessing the app as http://localhost:8000.
If you deploy to a different server, you need to adjust settings. Here are the relevant configuration options, now with more comments:
# Base Cubes Server URL.
# Your Cubes Server needs to be running and listening on this URL, and it needs
# to be accessible to clients of the application.
CUBESVIEWER_CUBES_URL="http://localhost:5000"
# CubesViewer Store backend URL. It should point to this application.
# Note that this must match the URL that you use to access the application,
# otherwise you may hit security issues. If you access your server
# via http://localhost:8000, use the same here. Note that 127.0.0.1 and
# 'localhost' are different strings for this purpose. (If you wish to accept
# requests from different URLs, you may need to add CORS support).
CUBESVIEWER_BACKEND_URL="http://localhost:8000/cubesviewer"
Alternatively, you could change CUBESVIEWER_BACKEND_URL to "http://127.0.0.1:8000/cubesviewer", but I recommend using hostnames rather than IP addresses for this.
Finally, I haven't yet tested with CORS support, but check this pull request if you wish to try that approach.
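If you do decide to try CORS (so that additional origins such as http://127.0.0.1:8000 are accepted as well), a minimal sketch of that idea using the django-cors-headers package would look roughly like this in settings.py. Take it as an untested illustration rather than what the linked pull request does, and note that setting names vary between package versions (and that older Django releases use MIDDLEWARE_CLASSES instead of MIDDLEWARE):
# Sketch only: accept cross-origin requests to the CubesViewer backend views.
# Assumes INSTALLED_APPS and MIDDLEWARE are defined as lists earlier in this file.
INSTALLED_APPS = list(INSTALLED_APPS) + ['corsheaders']
# The CORS middleware should sit as close to the top of the stack as possible.
MIDDLEWARE = ['corsheaders.middleware.CorsMiddleware'] + list(MIDDLEWARE)
# Newer django-cors-headers releases use CORS_ALLOWED_ORIGINS;
# older ones call this CORS_ORIGIN_WHITELIST.
CORS_ALLOWED_ORIGINS = [
    "http://localhost:8000",
    "http://127.0.0.1:8000",
]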
I have a baffling issue with cookie handling in a Blazor Server app (.NET Core 6) using OpenID (Keycloak). Actually, more than a couple of issues, which may or may not be linked. It's a typical (?) reverse proxy architecture:
A central nginx receives queries for services like Jenkins, JupyterHub, SonarQube, Discourse etc. These are mapped through aliases to internal IPs where the nginx can reach them. This nginx intercepts URLs like: https://hub.domain.eu
A reverse proxy which resolves to https://dsc.domain.eu. This forwards requests to a Blazor app running in Kestrel on port 5001. Both Kestrel and nginx run under SSL, which is required to get the WebSockets working.
Some required background: the Blazor app is essentially a ‘hub’ whose various Razor pages ‘host’ the above-mentioned services in iframe-like fashion. How it works: when the user asks for the root path (https://hub.domain.eu), it opens the root page of the Blazor app (/).
The nav menu contains the links to the Razor pages which contain the iframes for the above-mentioned services. For example:
The relative path is intercepted by the ‘central’ nginx, which loads Jenkins. Everything is under the same Keycloak OpenID server. Note that everything works fine without the Blazor app.
Scenarios that cause the same problem
Assume the user logs in to my app using the login page of Keycloak (NOT the REST API) through redirection. He then follows a link to one of the services and is indeed logged in there as well. The controls in the app change accordingly to indicate that the user is authenticated. If you close the tab and open a new one, the Blazor app acts as if it's not logged in, while the other services (e.g. Jenkins) still show the logged-in user from before. When you press the Login link, you're greeted with a 502 nginx error. If you clear the cookies in the browser (or use private/incognito mode), everything works again. Or if you just log off, e.g. from Jenkins.
Assume the user is now in a service such as Jenkins, SonarQube, etc. If you press F5, you now have two problems: you get a 404 error, but only on SOME services such as SonarQube and not on others (that's a side problem for another post). The main thing is that the Blazor app again appears not logged in after pressing Back/Refresh.
The critical part of Program.cs looks like the following:
This class handles the login / logoff:
Side notes:
SaveTokens = false still causes large-header errors and results in an empty token (shown in the above code with the warning "Token received was null"). I'm still able to obtain user details from HttpContext, though.
No errors show up in the reverse proxy's error.log or in Kestrel (all deployed on Linux).
MOST important: if I copy-paste the failed login link (the one that produced the 502 error) into a "clean" browser, it works fine.
There are lots of properties affecting the OpenID Connect setup, and it could also be an nginx issue, but I've run out of ideas over the last five days. The nginx config has been adjusted for large headers and WebSockets.
Any clues as to where I should at least focus my research to track down the error?
The 502 error points to a problem on nginx's side. The reverse proxy had the proper configuration, but as it turned out, the front one did not. Once we set the header size to the suggested value, everything worked.
Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in, and it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from #keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js:
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out: look in the "Network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which informs the backend which protocol was used in the original request
Tell express to trust what the proxy is saying by configuring the trust proxy setting
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.
I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this a duplicate, this is not the same as the question asked at Google sign in website Error : Permission denied to generate login hint for target domain because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the url of the server in the Authorized Javascript Origins, as in the following image:
and when I get the error, the request shows that the same url was sent, as in the following image:
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect uri, as this was a test site on the live server, and Google only accepts actual urls (as in "http://mycompany.com" or "http://localhost") as redirect uris.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, namely some IP that can't be accessed via the Internet, you should use a magic DNS service, like xip.io, that will give you a URL that your browser will resolve to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving it on your computer, but it won't work when you're deploying outside the Internet, as on a VPN, intranet, or through a tunnel.
So, the steps:
Get your IP address, the one you're deploying at that isn't a public domain; let's say it's 10.0.0.1 as an example.
add http://10.0.0.1.xip.io to your Authorized Javascript Origins on the Google Developer Console.
open your site by visiting http://10.0.0.1.xip.io
clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform, and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add this domain mydevserver.com to Authorized Javascript Origins. If you are using a non-standard port, specify it with your domain in Authorized Javascript Origins.
Note: Clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
And use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app port to my localhost port.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in the Google Developers Console (console.developers.google.com).
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth Client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally, using Google Auth for logging in.
The URL I was trying to hit was http://127.0.0.1:8000/master
I just changed the IP to localhost: http://localhost:8000/master/
And it worked. I was able to log in to the website using Google Auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser URL bar: http://localhost/yourfolder/index.html
I have two instances of Odoo on a server in the cloud. If I perform the following steps, I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I need to remove the browser cookies first to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work well.
It's not an Nginx issue because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change the file below as shown and check whether the issue is solved.
addons/web/controllers/main.py
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
Then delete the line request.session.db = db, which is inside this if statement.
Try the following changes in:
openerp/addons/base/ir/ir_http.py
In the method _handle_exception, somewhere around line 140, you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try specifying 0.0.0.0 as the xmlrpc_interface in the configuration file; this should have OpenERP listen on port 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name and filter the list of available databases: you need to start it with the --db-filter parameter, which represents a pattern used to filter the list of available databases. %h represents the domain name and %d is the first domain component of that domain. So for example with --db-filter=^%d$ I will only see the test database if I end up on the server using http://test.example.com:8069. If there's only one database match, the list is not displayed and the user will directly end up on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
The port reuse problem comes about because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The name-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all the interfaces. If you want to do the same with OpenERP you only need to start one OpenERP server for all your domains, and make it listen on 0.0.0.0, port 8069, as I explained above.
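Putting the above together, starting a single server along these lines should then cover all of your domains (option names as in OpenERP 6.1/7; double-check them against your version):
./openerp-server --xmlrpc-interface=0.0.0.0 --xmlrpc-port=8069 --db-filter='^%d$' --proxy-mode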
On top of that, it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like a lot of overkill. OpenERP is designed to be multi-tenant so that many (read: hundreds of) databases can be served from the same server.
Finally, I think this is the expected behaviour. The cookies of all websites are stored separately for each website (for each domain) in the web browser. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change the Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multilogin or Multiaccount
There is another workaround. If I use this Chromium extension, the problem disappears because the sessions are treated separately:
SessionBox
I am trying to get a Linksys router with the latest DD-WRT (v24-sp2) in my house connected, via Comcast, to an external Squid (v3) proxy that I am running on AWS. When I connect over WiFi to the DD-WRT router, it connects to the Squid proxy, but I get the nasty message (abbreviated here to show the relevant part):
While trying to retrieve the URL: /
Note the bare slash. I get this when I go to a root domain, like www.cnn.com. If I go to a page under a site, like www.cnn.com/today (fake link used for example only), it returns an error like:
While trying to retrieve the URL: /today
Again, notice the "/today", as if the root domain has been removed and only the string to the right of the domain name is being looked up.
For some background, I have installed Squid with as generic a configuration as possible, and have done so on two servers with the same results. I get this same error no matter what domain I go to. Also, if I switch the network settings on my Mac to use this Squid proxy directly, it works fine. Only the connections from the DD-WRT give this error.
I have tried the instructions on the DD-WRT site with no luck. Others seem to have gotten this working well, so I assume I am making a configuration mistake.
Any clues for me? TIA...