Is it possible to retrieve the client's SSL certificate from the current connection in Django?
I don't see the certificate in the request context passed from lighttpd.
My setup has lighttpd and Django working in FastCGI mode.
Currently, I am forced to manually connect back to the client's IP to verify the certificate.
Is there a clever technique to avoid this? Thanks!
Update:
I added these lines to my lighttpd.conf:
ssl.verifyclient.exportcert = "enable"
setenv.add-request-header = (
"SSL_CLIENT_CERT" => env.SSL_CLIENT_CERT
)
Unfortunately, the env.SSL_CLIENT_CERT fails to dereference (does not exist?) and lighttpd fails to start.
If I replace the "env.SSL_CLIENT_CERT" with a static value like "1", it is successfully passed to Django in the request.META fields.
Anything else I could try? This is lighttpd 1.4.29.
Yes, though this question is not Django-specific.
Web servers usually have an option to export client-side SSL certificate data as environment variables or HTTP headers. I have done this myself with Apache (not lighttpd).
This is how I did it:
1. On Apache, export the SSL certificate data to environment variables.
2. Add new HTTP request headers containing these environment variables.
3. Read the headers in the Python code.
http://redmine.lighttpd.net/projects/1/wiki/Docs_SSL
Looks like the option name is ssl.verifyclient.exportcert.
Though I am not sure how to do step 2 with lighttpd, as I have little experience with it.
Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in, and it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be for example example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from @keystone-6/core.)
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
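For reference, the client side of that might look roughly like this. It is only a sketch and assumes createUploadLink() comes from apollo-upload-client; credentials: 'include' is there so the session cookie still travels with the request if it ends up crossing origins:
import { createUploadLink } from 'apollo-upload-client';

const link = createUploadLink({
  uri: '/api/graphql',      // relative path handled by the rewrite in next.config.js
  credentials: 'include',   // forward the session cookie with every GraphQL request
});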
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "Network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, the flag is effectively ignored on localhost, which is why it still works in development.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which tells the backend which protocol was used in the original request
Tell Express to trust what the proxy is saying by configuring the trust proxy setting (see the sketch after this list)
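Here is a minimal Express sketch of both steps (not Keystone-specific; how you reach the underlying Express app depends on your Keystone version, so treat this as an illustration of the idea rather than a drop-in fix, and /debug/protocol is just a made-up route to show the effect):
// nginx side, for reference:
//   proxy_set_header X-Forwarded-Proto $scheme;
const express = require('express');
const app = express();

// Trust the first proxy in front of the app, so req.protocol / req.secure
// reflect the X-Forwarded-Proto header instead of the plain-HTTP hop
// between nginx and the backend.
app.set('trust proxy', 1);

app.get('/debug/protocol', (req, res) => {
  // With trust proxy enabled this reports 'https' when the original request was HTTPS.
  res.send(req.protocol);
});

app.listen(3000);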
I did a write up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it's still relevant.
Update
From Simon's comment, the above guesses missed the mark, but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
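For example, a sketch of what that could look like in a Keystone 6 config, assuming the Next frontend is served from https://www.example.com (swap in your own origin; the ./schema and ./session imports are hypothetical stand-ins for your existing files):
import { config } from '@keystone-6/core';
import { lists } from './schema';     // hypothetical: your existing lists
import { session } from './session';  // hypothetical: the statelessSessions(...) object shown above

export default config({
  db: { provider: 'postgresql', url: process.env.DATABASE_URL }, // placeholder
  lists,
  session,
  server: {
    cors: {
      origin: ['https://www.example.com'], // the frontend origin
      credentials: true,                   // required so the browser may send the session cookie
    },
  },
});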
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.
I have an Angular app using Electron as the desktop wrapper. And there's a separate Django backend which provides HTTP APIs to the Electron client.
So normally when I call the login API, the response header will have a Set-Cookie field containing the sessionId. I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (dev tools of Electron).
After some further debugging I noticed a warning sign beside my Set-Cookie in dev tools. It said that the cookie is blocked because SameSite is set to Lax. So I found a way to modify the server code to return a SameSite=None cookie (together with the Secure attribute; I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
which did work (and the warning sign is gone) but the cookie is still not visible.
So what's the problem here? Why can't I (and how can I) see that cookie, so as to make sure that the login works in the actual client, not just Postman?
(btw, now both ends are being developed in localhost.)
There's no need to worry. A good way to check if it works is to actually make a request that requires login (after the API has been Postman tested) and see if the desired data are returned. If so, you are good to go (especially when the warning is gone).
If the sessionId cookie is saved it should automatically be included in the request. Unless there's something wrong with the cookie's path; but a / path would be fine.
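As a sketch of such a check from the Angular side ('/api/whoami' is a made-up endpoint; the withCredentials flag matters when the page and the API are on different origins, otherwise the browser won't attach the cookie):
// Somewhere in a component/service where HttpClient is injected as `http`.
this.http
  .get('http://localhost:8000/api/whoami', { withCredentials: true })
  .subscribe({
    next: (data) => console.log('logged in, got:', data),
    error: (err) => console.error('request failed, maybe not logged in:', err),
  });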
Why is the cookie not visible: it's probably due to the separation of the front and back ends. In Electron, the pages are typically local HTML files, since one common configuration step is to modify loadURL (or something similar) in main.js, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
If you use Chrome instead to test, you may be able to see it in all cookies. In some cases where the frontend and backend are deployed to the same host you may see the cookie in dev tools. But I guess there're always some reasons why you need Electron to create a desktop app (e.g. Python scripts).
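If you want to confirm that the cookie really is stored somewhere inside Electron, you can list it from the main process. A small sketch (http://localhost:8000 is a placeholder for wherever your Django backend runs):
const { app, session } = require('electron');

app.whenReady().then(async () => {
  // Cookies are keyed by the origin they were set for, not by the file:// page.
  const cookies = await session.defaultSession.cookies.get({ url: 'http://localhost:8000' });
  console.log(cookies); // the sessionid cookie should show up here after a successful login
});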
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing in order to prevent potential problems and to get ready for release.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting a real certificate, but it uses a self-signed certificate and may make your frontend complain.
To fix this, consider adding the following code to the main process:
// const { app } = require('electron');
if (!app.isPackaged) {
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
app.isPackaged provides a good way to distinguish between development (unpackaged) and production (packaged), so the certificate check is only disabled in development, in order to make the code work there.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's Secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will only allow the cookie to be set over an HTTPS connection.
PS: This is just for your localhost. Hopefully you'll be using an HTTPS connection in other environments.
I'm using Django's request.get_host() in the view code to differentiate between a dynamic number of domains.
For example, if a request comes from www.domaina.com, that domain is looked up in a table and content related to it is returned.
I'm running certbot programmatically to generate the LetsEncrypt certificate (including acme challenge via Django). I store the cert files as base64 strings in PostgreSQL.
That works perfectly fine, but I can't figure out how to 'apply' the certificate on a dynamic per-domain basis.
I know that this is normally done with TLS termination in nginx or even in gunicorn. But that's not dynamic enough for my use case.
The same goes for wildcard or SAN certificates (not dynamic enough).
So the question is:
Given I have valid LetsEncrypt certs, can I use them to secure Django views at runtime?
Django works as a WSGI application: it gets an HTTP request and does some work of its own, then hands it over to middleware and then to your views.
I'm fairly certain that the generic work Django does at the start already requires a regular HTTP request, not a "binary blob of unreadable encrypted stuff".
Perhaps gunicorn can handle HTTPS termination, but I'm not sure.
Normally, nginx or haproxy is used, also because TLS termination is something that needs to be really secure.
I'm using haproxy now, which has a handy feature that you can just point it at a directory full of *.pem certificate files and it will read them and use them. So if you could write the certs to such a dir and make sure haproxy is reloaded every time a certificate gets changed, you could be pretty close to a dynamic way of working.
I have two instances of Odoo on a server in the cloud. If I perform the following steps I get "Internal Server Error":
I log in to the first instance (http://111.222.33.44:3333)
I close the session
I load the address of the second instance in the same browser (http://111.222.33.44:4444)
If I want to work in the second instance (on another port), I first need to remove the browser cookies to access the other Odoo instance. If I do this, everything works fine.
If I load them in different browsers (Firefox and Chromium) at the same time, they also work well.
It's not an Nginx issue because I tried with and without it.
Is there a way to solve this permanently? Is this the expected behaviour?
If you have access to the source code, you can change this file as shown below and check whether the issue is solved.
addons/web/controllers/main.py
if db != request.session.db:
    request.session.logout()
    request.session.db = db
    abort_and_redirect(request.httprequest.url)
Then delete the line request.session.db = db, which is below this if statement.
Try the following change in:
openerp/addons/base/ir/ir_http.py
In method _handle_exception somewhere around line 140 you will find this piece of code:
attach = self._serve_attachment()
if attach:
    return attach
Replace it with:
if isinstance(exception, werkzeug.exceptions.HTTPException) and exception.code == 404:
    attach = self._serve_attachment()
    if attach:
        return attach
You can perfectly well serve all the databases with a single OpenERP server on your machine. Unfortunately, you did not mention what error you were seeing and what you expected as a result, which makes it a bit harder to help you ;-)
Anyway, here are some random ideas based on the information you provided:
If you have a problem with OpenERP not listening on all interfaces, try to specify 0.0.0.0 as the xmlrpc_interface in the configuration file, this should have OpenERP listen on 8069 on all IPs.
Note that Apache is not relevant if you're connecting to e.g. http://www.sample.com:8069/?db=openerp because you're directly connecting to OpenERP. If you want to go through Apache, you need to setup ReverseProxy rules in your vhost configs, and OpenERP does not need to listen to all public IPs then.
OpenERP 6.1 and later can autodetect the database name based on the virtual host name, and filter the name of the available databases: you need to start it with the --db-filter parameter, which represents a pattern used to filter the list of available databases. %h represents the domain name and %d is the first domain component of that domain. So for example with --db-filter=^%d$ I will only see the test database if I end up on the server using http://test.example.com:8069. If there's only one database match, the list is not displayed and the user will directly end up on the right database. This works even behind Apache reverse proxies if you make sure that OpenERP sees the external hostname, i.e. by setting an X-Forwarded-Host header in your Apache proxy config and enabling the --proxy mode of OpenERP.
The port reuse problem comes because you are trying to start multiple OpenERP servers on the same interface/port combination. This is simply not possible unless you are careful to start just one server per IP with the IP set in the xmlrpc_interface parameter, and I don't think you need that. The named-based virtual hosts that Apache supports are all handled by a single master process that listens on port 80 on all the interfaces. If you want to do the same with OpenERP you only need to start one OpenERP server for all your domains, and make it listen on 0.0.0.0, port 8069, as I explained above.
On top of that it's not clear what you would have set differently in the various config files. Running 40 different OpenERP servers on the same machine with identical code sounds like a lot of overkill. OpenERP is designed to be multi-tenant so that many (read: hundreds) of databases can be served from the same server.
Finally, I think this is the expected behaviour. The cookies of all websites are stored per website (per domain) in the web browser. So if I only change the port, the cookies of the first instance conflict with the cookies of the other instance because they have the same domain (111.222.33.44 in my example).
So there are some workarounds:
Change Domain Locally
Create a couple of domain names on my laptop in /etc/hosts:
111.222.33.44 cloud01
111.222.33.44 cloud02
Then the cookies don't interfere with each other anymore. To access each instance:
http://cloud01:3333
http://cloud02:4444
Browser Extension: Multilogin or Multiaccount
There is another workaround. If I use this Chromium extension the problem disappears because the sessions are treated separately:
SessionBox
I am relatively new to JavaScript, and I have an uploader tool called Fine Uploader that I was considering using. Locally (on my development machine) I got it to work (VB.NET), but when I put it on my external server, I noticed that a POST is done over HTTP and directly to the server's domain name (e.g. mydomain.com), instead of mydomain.com/testproject. The site only allows HTTPS traffic.
Is there an easy way to change this? (It should point at https://mydomain.com/testproject/FileUpload.aspx.)
The code used by Fine Uploader shows a parameter called endpoint: '/FileUpload.aspx'.
Do I have to make changes in the settings of IIS for this webservice?
If you need to enable CORS (cross-domain requests), Fine Uploader supports this. You should read my blog post on how CORS support is implemented in Fine Uploader and how you can support such requests in your server-side code.
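As a sketch of the client-side change (these are standard Fine Uploader options; the cors block is only needed if the upload endpoint really lives on a different origin than the page serving it):
var uploader = new qq.FineUploader({
  element: document.getElementById('fine-uploader'),
  request: {
    // Use an absolute HTTPS URL so the POST no longer falls back to
    // http:// at the domain root.
    endpoint: 'https://mydomain.com/testproject/FileUpload.aspx'
  },
  cors: {
    expected: true,        // tell Fine Uploader the endpoint is cross-origin
    sendCredentials: true  // only if cookies must accompany the upload
  }
});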