Torquebox stomplet session empty - ruby-on-rails-4

I'm trying to implement user authentication for web sockets in TorqueBox, and according to everything on the internet, I should be able to access the HTTP session from within a stomplet if I'm running the web app alongside the STOMP server, which I am.
My configuration looks something like this
web do
  context '/'
  host 'localhost'
end
stomp do
  host 'localhost'
end
stomplet GlobalStomplet do
  route '/live/socket'
end
I've also tried commenting out the web and stomp blocks, but nothing changes.
Basically, the sockets are working: I can connect and subscribe. In my stomplet, the on_subscribe method has a few debug lines
Rails.logger.debug "SESSION = #{subscriber.session}"
Rails.logger.debug "SESSION 2 = #{subscriber.getSession.getAttributeNames}"
Rails.logger.debug "SOCKET SESSION = #{TorqueBox::Session::ServletStore.load_session_data(subscriber.getSession)}"
And any other combination of this sort of thing, but in every case I am given an empty session. The only exception is when I explicitly load the session (as in the last debug line above): my session contains a session ID and something like TORQUEBOX_INITIAL_KEYS, but the session ID is not the HTTP session ID, and is simply something like session-1 and nothing useful.
I have an initialiser in the Rails app setting up the TorqueBox session store
MyApp::Application.config.session_store :torquebox_store, {
  key: '_app_key'
}
I don't receive any exceptions from anything so I assume there are no obvious problems, but I've tried everything I can think of and still don't have a session that I can use.
What am I doing wrong?
I'm using TorqueBox 3.1.0, Rails 4, and JRuby 1.7.11.

Well, it seems I wasn't doing anything wrong per se, but there seems to be an underlying bug in TorqueBox (filing a bug report now).
It seems as though TorqueBox web apps are quite happy with me assigning a custom key for the session store, and everything works as expected. Unfortunately, it seems as though the stomplets look for the normal JSESSIONID only, and ignore the custom-defined key.
To confirm, I removed the custom key, and it worked. I then reintroduced it, and again it stopped working. With the key still in place, I manually set the JSESSIONID cookie value, reconnected, and suddenly my session appeared.

Keystone session cookie only working on localhost

Edit:
After investigating this further, it seems cookies are sent correctly on most API requests. However, something happens in the specific request that checks whether the user is logged in, and it always returns null. When refreshing the browser, a successful preflight request is sent and nothing else, even though there is a session and a valid session cookie.
Original question:
I have a NextJS frontend authenticating against a Keystone backend.
When running on localhost, I can log in and then refresh the browser without getting logged out, i.e. the browser reads the cookie correctly.
When the application is deployed on an external server, I can still log in, but when refreshing the browser it seems no cookie is found and it is as if I'm logged out. However if I then go to the Keystone admin UI, I am still logged in.
In the browser settings, I can see that for localhost there is a "keystonejs-session" cookie being created. This is not the case for the external server.
Here are the session settings from the Keystone config file.
The value of process.env.DOMAIN on the external server would be, for example, example.com when Keystone is deployed to admin.example.com. I have also tried .example.com, with a leading dot, with the same result. (I believe the leading dot is ignored in newer specifications.)
const sessionConfig = {
  maxAge: 60 * 60 * 24 * 30,
  secret: process.env.COOKIE_SECRET,
  sameSite: 'lax',
  secure: true,
  domain: process.env.DOMAIN,
  path: "/",
};
const session = statelessSessions(sessionConfig);
(The session object is then passed to the config function from #keystone-6/core.)
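For completeness, a minimal sketch of that wiring (assuming @keystone-6/core; the db settings and lists are placeholders):
import { config } from '@keystone-6/core';
export default config({
  db: { provider: 'postgresql', url: process.env.DATABASE_URL }, // placeholder
  lists,
  session, // the statelessSessions(sessionConfig) object from above
});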
Current workaround:
I'm currently using a workaround which involves routing all API requests to '/api/graphql' and rewriting that request to the real URL using Next's own rewrites. Someone recommended this might work and it does, sort of. When refreshing the browser window the application is still in a logged-out state, but after a second or two the session is validated.
To use this workaround, add the following rewrite directive to next.config.js
rewrites: () => [
  {
    source: '/api/graphql',
    destination:
      process.env.NODE_ENV === 'development'
        ? `http://localhost:3000/api/graphql`
        : process.env.NEXT_PUBLIC_BACKEND_ENDPOINT,
  },
],
Then make sure you use this URL for queries. In my case that's the URL I feed to createUploadLink().
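As a rough sketch, the client setup could look something like this (assuming @apollo/client and apollo-upload-client; note that the exact import form of createUploadLink varies between versions of that package):
import { ApolloClient, InMemoryCache } from '@apollo/client';
import { createUploadLink } from 'apollo-upload-client';
const client = new ApolloClient({
  link: createUploadLink({
    uri: '/api/graphql',    // relative path handled by the Next.js rewrite above
    credentials: 'include', // send the session cookie with every request
  }),
  cache: new InMemoryCache(),
});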
This workaround still means constant error messages in the logs since relative URLs are not supposed to work. I would love to see a proper solution!
It's hard to know what's happening for sure without knowing more about your setup. Inspecting the requests and responses your browser is making may help figure this out. Look in the "network" tab in your browser dev tools. When you make the request to sign in, you should see the cookie being set in the headers of the response.
Some educated guesses:
Are you accessing your external server over HTTPS?
The Keystone docs for the session API mention that, when setting secure to true...
[...] the cookie is only sent to the server when a request is made with the https: scheme (except on localhost)
So, if you're running your deployed env over plain HTTP, the cookie is never set, creating the behaviour you're describing. Somewhat confusingly, in development the flag is ignored, allowing it to work.
A similar thing can happen if you're deploying behind a proxy, like nginx:
In this scenario, a lot of people choose to have the proxy terminate the TLS connection, so requests are forwarded to the backend over HTTP (but on a private network, so still relatively secure). In that case, you need to do two things:
Ensure the proxy is configured to forward the X-Forwarded-Proto header, which tells the backend which protocol was used for the original request
Tell Express to trust what the proxy is saying by configuring the trust proxy setting (see the sketch after this list)
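A hedged sketch of that second step, assuming Keystone 6 exposes the underlying Express app through the server.extendExpressApp hook (check the docs for your version):
import { config } from '@keystone-6/core';
export default config({
  server: {
    extendExpressApp: (app) => {
      app.set('trust proxy', true); // trust X-Forwarded-* headers set by the proxy
    },
  },
  // ...db, lists, session as before
});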
I did a write-up of this proxy issue a while back. It's for Keystone 5 (so some of the details are off) but, if you're using a reverse proxy, most of it is still relevant.
Update
From Simon's comment, the above guesses missed the mark 😭 but I'll leave them here in case they help others.
Since posting about this issue a month ago I was actually able to work around it by routing API requests via a relative path like '/api/graphql' and then forwarding that request to the real API on a separate subdomain. For some mysterious reason it works this way.
This is starting to sound like a CORS issue.
If you want to serve your front end from a different origin (domain) than the API, the API needs to return a specific header to allow this. Read up on CORS and the Access-Control-Allow-Origin header. You can configure this by setting the cors option in the Keystone server config, which Keystone uses to configure the cors package.
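A hedged sketch of what that could look like, assuming Keystone 6's server.cors option (the front-end origin is a placeholder):
import { config } from '@keystone-6/core';
export default config({
  server: {
    cors: {
      origin: ['https://www.example.com'], // the front end's origin
      credentials: true,                   // required so the browser sends the session cookie cross-origin
    },
  },
  // ...db, lists, session
});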
Alternatively, the solution of proxying API requests via the Next app should also work. It's not obvious to me why your proxying "workaround" is experiencing problems.

Why can't I see my (localhost) cookie being stored in Electron app?

I have an Angular app using Electron as the desktop wrapper. And there's a separate Django backend which provides HTTP APIs to the Electron client.
So normally when I call the login API, the response header will have a Set-Cookie field containing the sessionId. And I can clearly see that sessionId in Postman; however, I can't see this cookie in my Angular app (dev tools of Electron).
After some further debugging I noticed a warning sign beside my Set-Cookie in dev tools. It said that the cookie was blocked because SameSite was set to Lax. So I found a way to modify the server code to return a SameSite=None cookie (together with the Secure attribute; I'm using HTTP):
# settings.py
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_SAMESITE = 'None'
which did work (and the warning sign is gone) but the cookie is still not visible.
So what's the problem here? Why can't I (and how can I) see that cookie, so as to make sure that the login works in the actual client, not just Postman?
(By the way, both ends are currently being developed on localhost.)
There's no need to worry. A good way to check if it works is to actually make a request that requires login (after the API has been Postman tested) and see if the desired data are returned. If so, you are good to go (especially when the warning is gone).
If the sessionId cookie is saved it should automatically be included in the request. Unless there's something wrong with the cookie's path; but a / path would be fine.
Why is the cookie not visible: it's probably due to the separation of front and back ends. In Electron, the pages are typically local HTML files, since one common configuration step is to modify loadURL (or something like that) in main.js, for instance:
mainWindow.loadURL(`file://${__dirname}/dist/your-project/index.html`);
So the "site" you are accessing from Electron can be considered as local filesystem (which has no domain and hence no cookie at all), and you should see an empty file:// entry in dev tools -> application -> storage -> cookie. It doesn't mean a local path containing all cookies of the Electron app. Although your backend may be on the same local machine, you are accessing as http:// instead of file:// so the browser (Electron) will treat it as an actual web server.
Therefore, your cookies should be stored in another entry like http(s)://localhost and you can't see it in Electron. (Note that the same cookie will work in both HTTP and HTTPS)
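If you want to confirm the cookie really is being stored, one option is to query Electron's cookie store from the main process (a rough sketch; the backend URL and port are assumptions):
const { app, session } = require('electron');
app.whenReady().then(async () => {
  // List the cookies Electron has stored for the backend origin
  const cookies = await session.defaultSession.cookies.get({ url: 'http://localhost:8000' });
  console.log(cookies); // the Django sessionid should appear here if it was set
});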
If you use Chrome instead to test, you may be able to see it among all cookies. In some cases where the frontend and backend are deployed to the same host you may see the cookie in dev tools. But I guess there are always reasons why you need Electron to create a desktop app (e.g. Python scripts).
Further reading
Using HTTPS
Although moving to HTTPS does not necessarily solve the original problem, it may be worth doing in order to prevent potential problems and get ready for publishing.
In your case, for the backend, you can use django-sslserver as a temporary solution before getting your SSL, but it uses a self-signed certificate and may make your frontend complain.
To fix this, consider adding the following code to the main process:
// const { app } = require('electron');
if (!app.isPackaged) {
  app.commandLine.appendSwitch('ignore-certificate-errors');
}
This provides a good way to distinguish between development (unpackaged) and production (packaged), and only disables the certificate check in development in order to make the code work.
Assuming that SESSION_COOKIE_SECURE in your config refers to the cookie's secure flag, you'll have to set
SESSION_COOKIE_SECURE = False
because if this flag is set to True, the browser will only allow the cookie to be set over an HTTPS connection.
PS: This is just for your localhost. Hopefully you'll be using an HTTPS connection in other environments.

How to monitor an action by user on Glass

I have a Mirror API based app in which I have assigned a custom menu item; clicking on it should insert a new card. I'm having a bit of trouble doing that. I need to know ways I can debug this:
Check if the subscription to the glass timeline was successful.
Print out something on console on click of the menu.
Any other way i can detect whether on click of the menu, the callback URL was called or not.
It sounds like you have a problem, but aren't sure how to approach debugging it? A few things to look at and try:
Question 1 re: checking subscriptions
The object returned from the subscriptions.insert should indicate that the subscription is a success. Depending on your language, an exception or error would indicate a problem.
You can also call subscriptions.list to make sure the subscriptions are there and are set to the values you expect. If a user removes authorization for your Glassware, this list will be cleared out.
Some things to remember about the URL used for subscriptions:
It must be an HTTPS URL and cannot use a self-signed certificate
The address must be resolvable from the public internet. "localhost" and local name aliases won't work.
The machine must be accessible from the public internet. Machines with addresses like "192.168.1.10" probably won't be good enough.
Question 2 re: printing when clicked
You need to make sure the subscription is set up correctly and that you have a webapp listening at the address you specified that will handle POST operations at that URL. The method called when that URL is hit is up to you, of course, so you can add logging to it. Language specifics may help here.
Try testing it yourself by going to the URL you specify using your own browser. You should see the log message printed out, at a minimum.
If you want it printed for only the specific menu item, you will need to make sure you can decode the JSON body that is sent as part of the POST and respond based on the operation and id of the menu item.
You should also make sure you return HTTP code 200 as quickly as possible - if you don't, Google's servers may retry for a while or eventually give up if they never get a response.
Update: From the sample code you posted, I noticed that you're either logging at INFO or sending to stdout, which should log to INFO (see https://developers.google.com/appengine/docs/java/#Java_Logging). Are you getting the logging from the doGet() method? This StackOverflow question suggests that App Engine doesn't display items logged at INFO unless you change the logging.properties file.
Question 3 re: was it clicked or not?
Depending on the configuration of your web server and app server, there should be logs about what URLs have been hit (as noted by #scarygami in the comments to your question).
You can test it yourself to make sure you can hit the URL and it is logging. Keep in mind, however, the warnings I mentioned above about what makes a valid URL for a Mirror API callback.
Update: From your comment below, it sounds like you are seeing the URL belonging to the TimelineUpdateServlet is being hit, but are not seeing any evidence that the log message in TimelineUpdateServlet.doPost() is being called. What return code is logged? Have you tried calling this URL manually via POST to make sure the URL is going to the servlet you expect?

Cookie handling in subsequent requests in same page

I'm creating my own (multi threaded) ISAPI based website in C++ and I'm trying to implement correct session management.
The problem is that when a new session should be created, the session is created twice when using subsequent requests in the generated web page.
Here's how it works:
- Client requests http://localhost and sends either no cookie or a cookie with an old session ID in it.
- Server looks at the session cookie and feels that it needs to create a new one because it no longer exists: it prepares a header with a cookie in it containing a new session ID and sends the complete header to the client (I tracked this with the Live HTTP Headers plugin in Firefox and it is correct). It also prepares some data, like the page and stuff like that (not yet body data, as it is still processing data from the database), and sends what it has back to the client.
- Client should have this new session cookie now, and sees the stylesheet link and immediately sends the stylesheet request http://localhost/css to my server. But... he still does this with the old session ID for some reason, not with the newly received one!
- Server sees this request (again with a session ID that no longer exists), generates another new session and sends the new session ID in a cookie along with the stylesheet data.
So the client has received two session IDs now and will from now on keep using the second one, as the first one is overwritten; but nevertheless the first page has used the wrong session (or actually, the second page has).
You could say that this is not a problem, but when I start using personalized stylesheets, I will have the wrong stylesheet on the first page and as the page will use AJAX to refresh the content (if available), it is possible that the stylesheet is never reloaded unless the client refreshes.
So, is this a problem that is always there when doing this kind of thing? Will the browser always send an old cookie although it has already received a new one but is still processing the page? Is this a problem that, for example PHP, also has?
Note: before all the discussions start about "use php instead" or something: I am rewriting a website that I had first written in PHP, it became popular, had thousands of (real) visitors every hour and started killing my server (the website doesn't make that kind of money that I can throw lots of servers at it). By writing it in C++, requests take 2ms instead of 200ms in PHP... I can optimize everything. By taking my time to develop this ISAPI correctly, it is safely multi-threaded and can be multi-processed, multi-servered. And most of all, I like the challenge.
Added note: It seems that the problem is only there when an old session exists in the cookies, because when I completely clear all cookies from my browser, and a new one is created and sent back to the client, the subsequent stylesheet request immediately uses the given session id. This seems to be some kind of proof that I'm doing something wrong when an old session id is sent... Should an existing cookie be deleted first? How?
Added note: The cookie is written with an expire-date one year ahead.
I have found out what the problem was: I was under the assumption that setting a cookie without specifying a path would make the session work on all paths on that domain.
By using http://foo.bar/home as the main page and http://foo.bar/home/css as the stylesheet, internally translating those URLs to ?s1=home and ?s1=home&css=y, I was actually using two different paths according to the browser, which did not pass the cookie to the css request.
For some reason, they actually got together afterwards; I don't fully understand why.
Isn't this silly? Won't people often have an http://foo.bar/index.php and an http://foo.bar/css/style.css.php, just because they use subdirectories to keep their structure clean?
If anyone knows of a way to fix it, to make subpaths also work with the same cookies, let me know, but as I understand it from the definition of cookies, they are tied to a specific path (although it seems that if you specifically add a path other than /, it will work on subdirectories as well?)
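For what it's worth, explicitly writing the cookie with Path=/ should make the browser send it for every path on the domain (a hedged example; the cookie name and value are placeholders, and your one-year Expires attribute would be appended):
Set-Cookie: SESSIONID=abc123; Path=/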

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
By middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user logs in, I want the site working over HTTP.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while I want one server listening on two ports).
When the user navigates away from login, due to the redirection to the HTTP server, the data in request.session is not present (although it is present on HTTPS), thus showing that no user is logged in. So, considering the setup of Apache is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on HTTPS over to HTTP.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies in the browser should be present, and so the session information should still be there.
Take a look at your cookies using a plugin. Search for the session cookie you have set. By default these cookies are named "sessionid" by Django. Make sure the domains and paths are in fact correct for both the secure session and regular session.
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secure in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer and the exploits that I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to have essentially two different servers running because a) it is listening on two different ports and b) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run servers with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the same request to *WSGI or mod_python, you will then have to have logic to make sure that no one tries to log in over your non-encrypted connection, because the only difference visible to Django will be the result of request.is_secure(). All the URLs and views in your urlconf will be accessible.
Whew that is a lot. I hope that helps.