Does anyone know whether I can set a session value for the current domain, and use this session for another domain?
For example:
I set a session on www.aabc.com and I want that session to work on www.ccc.com as well -- for example, I click a button on www.aabc.com and get redirected to www.ccc.com. Is that possible?
You can only set cookies for your own domain (and, if I remember correctly, for other sites on your domain, like subdomains).
This is (mainly?) for security reasons: otherwise, anyone could set cookies for any website... I'll let you imagine the mess ^^
(The only way to set cookies for another domain seems to be by exploiting a browser security hole - see http://en.wikipedia.org/wiki/Cross-site_cooking for instance; so, in normal cases, it's not possible -- happily.)
I had to set this up at my last job. The way it was handled was through some hand-waving and semi-secure hash passing.
Basically, each site, site A and site B, has an identical gateway set up on its domain. The gateway accepts a user ID, a timestamp, a redirect URL, and a hash. The hash is computed from a shared key, the timestamp, and the user ID.
Site A generates the hash and sends all of the information listed above to the gateway at site B. Site B then hashes the received user ID and timestamp with the shared key.
If the generated hash matches the received hash, the gateway logs the user in, loads their session from a shared memory table or memcached pool, and redirects the user to the received redirect URL.
Lastly, the timestamp is used to determine an expiration time for the passed hash (e.g. the hash is only valid for x amount of time). We used something around 2.5 minutes for our TTL (to account for network lag and perhaps a refresh or two).
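A minimal sketch of what that handshake could look like, in Python; the names (SHARED_KEY, build_login_url, verify_login_request) and the exact parameter set are my own placeholders, not the actual implementation:

import hashlib
import hmac
import time
from urllib.parse import urlencode

SHARED_KEY = b"replace-with-a-long-random-secret"  # known to both sites
TTL_SECONDS = 150  # ~2.5 minutes, as mentioned above

def _sign(user_id: str, timestamp: int) -> str:
    msg = f"{user_id}:{timestamp}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def build_login_url(gateway_url: str, user_id: str, redirect_url: str) -> str:
    """Site A: build the URL that sends the user to site B's gateway."""
    ts = int(time.time())
    params = {"user_id": user_id, "ts": ts, "redirect": redirect_url, "sig": _sign(user_id, ts)}
    return f"{gateway_url}?{urlencode(params)}"

def verify_login_request(user_id: str, ts: int, sig: str) -> bool:
    """Site B: recompute the hash and check the timestamp window."""
    if abs(time.time() - ts) > TTL_SECONDS:
        return False
    return hmac.compare_digest(_sign(user_id, ts), sig)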
The key points here are:
Having a shared resource where sessions can be serialized
Using a shared key to create and confirm hashes (if you're going to use md5, do multiple passes)
Only allow the hash to be valid for a small but reasonable amount of time.
This requires control of both domains.
Hope that was helpful.
You cannot access both domains' sessions directly; however, there are legitimate ways to pass session data between two sites that you control. For data where tampering doesn't matter, you can simply have a page on domain abc.com load a 1px by 1px "image" on xyz.com and pass the appropriate data in the query string. This is very insecure, so make sure the user can't break anything by tampering with it.
Another option is to use a common store of some kind. If the two sites have access to the same database, this can be a table that domain abc.com stores a record in; it then passes the id of the record to domain xyz.com. This is a more appropriate approach if you're trying to pass login information. Just make sure you obfuscate the ids so a user can't guess another record's id.
Another take on the common store method, if the two domains are on different servers or cannot access the same database, is to implement some sort of cache store service that stores information for a limited time and is accessible by both domains. Domain abc.com passes in some data and the service passes back an ID; domain abc.com sends that ID to domain xyz.com, which then turns around and asks the service for the data. Again, if you develop this service yourself, make sure you obfuscate the ids.
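A minimal sketch of that handoff, assuming Redis as the shared store; the function and key names are illustrative, not a specific product's API:

import json
import secrets

import redis

r = redis.Redis(host="localhost", port=6379)

def stash_session_data(data: dict, ttl_seconds: int = 120) -> str:
    """abc.com: store the data and get back an unguessable handle."""
    handle = secrets.token_urlsafe(32)  # hard to guess, unlike a sequential id
    r.setex(f"handoff:{handle}", ttl_seconds, json.dumps(data))
    return handle  # pass this handle to xyz.com, e.g. in the redirect URL

def fetch_session_data(handle: str) -> dict | None:
    """xyz.com: redeem the handle once, then delete it."""
    raw = r.get(f"handoff:{handle}")
    if raw is None:
        return None
    r.delete(f"handoff:{handle}")  # single use
    return json.loads(raw)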
I have a resource called Sites.
I am planning to have an endpoint as follows:
/tenant/:tenantId/sites/:siteId
The endpoint is to return a site’s tree which will vary based on the userId extracted from the JWT token.
Since it will vary based on the user requesting it, should the endpoint include the userId in the URI -- maybe as a query parameter?
How should caching work in this case?
The sites tree returned by this endpoint will also change based on updates in another resource (i.e users/groups)
Should the cache be discarded for all users whenever there is a change in the sites resource itself or when there is a change in groups?
I am using an API gateway, so I will need to purge the cache through the client Cache-Control header when any of the resources are updated.
Since the data will vary based on the user requesting it, the endpoint should have the userId in the URI -- it could simply be a path parameter, like the tenantId and siteId.
Caching can be done on the basis of the If-Modified-Since header, which indicates whether the data has changed or not.
The If-Modified-Since HTTP header carries the timestamp of the copy the browser last downloaded from the server. This helps determine whether or not the resource has changed since the last time it was accessed.
From a security point of view, if a user can only access his own sites, the user id should not be in the path (or a query param), because if you do that, any user can modify the URL in his browser and try to access other users' sites. To avoid that, the URL should not contain any userId (you can replace it with something like /me), and the service that handles the request should extract the id from the token.
I don't know if you are using an in-memory cache or a distributed cache, and whether sites/users/groups are different services (deployed on different servers) or live in the same application. In any case, when any of the resources the cache depends on is modified, you should invalidate the cache for the affected users.
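A minimal sketch of per-user caching with the two invalidation cases from the question (a change to the site itself vs. a change to groups); it assumes the user id has already been extracted from the JWT, and all names (SiteTreeCache, etc.) are illustrative:

from datetime import datetime, timezone

class SiteTreeCache:
    def __init__(self):
        # (tenant_id, site_id, user_id) -> (tree, last_modified)
        self._entries = {}

    def get(self, tenant_id, site_id, user_id):
        return self._entries.get((tenant_id, site_id, user_id))

    def put(self, tenant_id, site_id, user_id, tree):
        self._entries[(tenant_id, site_id, user_id)] = (tree, datetime.now(timezone.utc))

    def invalidate_site(self, tenant_id, site_id):
        """Drop cached trees for every user when the site itself changes."""
        for key in [k for k in self._entries if k[:2] == (tenant_id, site_id)]:
            del self._entries[key]

    def invalidate_users(self, user_ids):
        """Drop cached trees only for the users affected by a group change."""
        affected = set(user_ids)
        for key in [k for k in self._entries if k[2] in affected]:
            del self._entries[key]

The stored last_modified timestamp can then back a Last-Modified response header, so the gateway or client can revalidate with If-Modified-Since and receive a 304 when nothing has changed.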
My web application's authentication mechanism currently is quite simple.
When a user logs in, the website sends back a session cookie which is stored (using localStorage) on the user's browser.
However, this cookie can too easily be stolen and used to replay the session from another machine. I notice that other sites, like Gmail for example, have much stronger mechanisms in place to ensure that just copying a cookie won't allow you access to that session.
What are these mechanisms and are there ways for small companies or single developers to use them as well?
We ran into a similar issue. How do you store client-side data securely?
We ended up going with an HttpOnly cookie that contains a UUID, plus an additional copy of that UUID stored in localStorage. On every request, the user has to send both the UUID and the cookie back to the server, and the server verifies that the two UUIDs match. I think this is how OWASP's double-submit cookie pattern works.
Essentially, the attacker needs access to both the cookie and localStorage.
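A minimal sketch of that server-side check; the cookie and header names are assumptions, not necessarily what we used:

import hmac

def is_double_submit_valid(cookies: dict, headers: dict) -> bool:
    """The HttpOnly cookie and the copy sent from localStorage must match."""
    cookie_uuid = cookies.get("session_uuid")        # set HttpOnly by the server
    submitted_uuid = headers.get("X-Session-UUID")   # added by client JS from localStorage
    if not cookie_uuid or not submitted_uuid:
        return False
    return hmac.compare_digest(cookie_uuid, submitted_uuid)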
Here are a few ideas:
Always use HTTPS - and HTTPS-only (Secure) cookies.
Save the cookie in a storage system (NoSQL/cache system/DB) and set a TTL (expiry) on it.
Never save the cookie as received into the storage; add a salt and hash it before you save or check it, just like you would with a password (see the sketch after this list).
Always clean up expired sessions from the store.
Save the issuing IP and IP2Location area, so you can check if the IP changes.
Exclusive sessions: one user, one session.
If a session collision is detected (another IP), kick the user and require two-way authentication for the next login request, for instance send an SMS to a registered phone number so he can enter the code at login.
Under no circumstances load untrusted libraries. Better yet, host all the libraries you use on your own server/CDN.
Check that you don't have injection vulnerabilities. Things like profiles, or generally anything that echoes back to the user what he entered in one way or another, must be heavily sanitized, as they are a prime vector of compromise. The same goes for data sent to the server via anything: cookies, GET, POST, headers -- everything you may or may not use from the client must be sanitized.
Should I mention SQL injections?
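Here is a minimal sketch of the hash-before-store point above, treating the session token like a password; the salt size and iteration count are assumptions:

import hashlib
import hmac
import os

def hash_token(token: str, salt: bytes | None = None) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the raw token."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)
    return salt, digest

def token_matches(token: str, salt: bytes, stored_digest: bytes) -> bool:
    """Re-derive the digest from the presented token and compare in constant time."""
    _, digest = hash_token(token, salt)
    return hmac.compare_digest(digest, stored_digest)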
Double sessions, whether using a URL session or storing an encrypted session id in local storage, are nice and all, but they are ultimately useless: both are accessible to malicious code that is already included in your site, like, say, a library loaded from a domain that has been hijacked in one way or another (DNS poisoning, compromised server, proxies, interceptors, etc.). The effort is valiant but ultimately futile.
There are a few other options that further increase the difficulty of fetching and effectively using a session. For instance, you could reissue session ids very frequently, say reissue a session id if it is older than 1 minute; even if you keep the user logged in, he gets a new session id, so a possible attacker has just 1 minute to do something with a hijacked session id.
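A minimal sketch of that rotation idea; the store interface (a plain dict here) and the ROTATE_AFTER value are assumptions for illustration:

import secrets
import time

ROTATE_AFTER = 60  # seconds a session id stays valid before it is replaced

def touch_session(store: dict, session_id: str) -> str:
    """Return the current (possibly new) session id for this request."""
    record = store.get(session_id)
    if record is None:
        raise KeyError("unknown or expired session")
    if time.time() - record["issued_at"] > ROTATE_AFTER:
        new_id = secrets.token_urlsafe(32)
        store[new_id] = {**record, "issued_at": time.time()}
        del store[session_id]  # the old id stops working immediately
        return new_id          # set this as the new session cookie
    return session_id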
Even if you apply all of these, there is no guarantee that your session won't be hijacked one way or the other; you just make it incredibly hard to do so, to the point of being impractical. But make no mistake, making it 100% secure is impossible.
There are loads of other security features you need to consider at the server level, like execution isolation, data isolation, etc. This is a very large discussion. Security is not something you apply to a system; it must be how the system is built from the ground up!
Make sure you're absolutely not vulnerable to XSS attacks. Everything below is useless if you are!
Apparently, you're mixing up two things: localStorage and cookies.
They are absolutely two different storage mechanisms:
Cookies are a string of data, that is sent with every single request sent to your server. Cookies are sent as HTTP headers and can be read using JavaScript if HttpOnly is not set.
LocalStorage, on the other hand, is a key/value storage mechanism that is offered by the browser. The data is stored there, locally on the browser, and it's not sent anywhere. The only way to access this is using JavaScript.
Now I will assume you use a token (maybe JWT?) to authenticate users.
If you store your token in localStorage, then just make sure that when you send it along to your server you send it as an HTTP header, and you'll be all done -- you won't be vulnerable to virtually anything. This kind of storage/authentication technique is very good for single-page applications (VueJS, ReactJS, etc.)
However, if you use cookies to store the token, then there comes the problem: while the token cannot be stolen by other websites, it can be used by them. This is called Cross-Site Request Forgery (CSRF).
This kind of attack basically works by the malicious page including something like:
<img src="https://yourdomain.com/account/delete">
When your browser loads their page, it'll attempt to load the image and send the authentication cookie along too, and eventually it'll delete the user's account.
Now there is an awesome CSRF prevention cheat sheet that lists possible ways to defend against that kind of attack.
One really good way is to use the synchronizer token method. It basically works by generating a token server-side and then adding it as a hidden field to the form you're trying to secure. When the form is submitted, you simply verify that token before applying the changes. This technique works well for websites that use templating engines with simple forms (not AJAX).
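A minimal sketch of that pattern; the session object is just a dict and the function names are my own, not a particular framework's API:

import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a token server-side and remember it in the user's session."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token  # render this into a hidden <input> in the form

def check_csrf_token(session: dict, submitted_token: str | None) -> bool:
    """On form submit, verify the hidden field against the session copy."""
    expected = session.get("csrf_token")
    if not expected or not submitted_token:
        return False
    return hmac.compare_digest(expected, submitted_token)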
The HttpOnly flag adds more security to cookies, too.
You can also use two-step authentication via phone number or email. Steam is a good example: every time you log in from a new computer, you either have to mark it as a "Safe Computer" or verify the login using your phone number/email.
I have two web applications that do different things, but authentication is done by only one of them (using Python and Tornado). I'd like the second application to access the user's credentials transparently. Currently I can read the cookie of a logged-in user via the Access-Control-Allow-Credentials header, so how would I access the cookie, store it (MongoDB/Redis/anywhere but MySQL), and retrieve it in the second app?
what I've tried:
self.set_secure_cookie('cookie_name', 'some value')  # works: I can see the cookie in subsequent request headers
self.get_secure_cookie("cookie_name")  # called just after setting the cookie, returns None
What I was thinking is to store the encrypted value and compare it later in the second application as and when needed -- is this sensible? All I need to do is ensure the user is logged in and exists in our list of users at the moment.
So you've managed to set a cookie by one of the servers and then retrieve it on the second? If so, great! That's the trickiest part (imho).
Now there are two ways to go.
Store data in the cookie
Tornado has, as you've noticed, support for secure cookies. This basically means that you can store data in the cookie and sign it with a secret. If both your servers have the same secret, they can verify that the cookie data is not altered, and you have successfully distributed data between the two servers. This decentralised alternative is not suitable if you need to store much data in the session.
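A minimal sketch, assuming both Tornado apps are started with the same cookie_secret; the secret and the cookie name are placeholders:

import tornado.web

SETTINGS = {"cookie_secret": "use-the-same-long-random-secret-on-both-servers"}

class LoginHandler(tornado.web.RequestHandler):
    def get(self):
        # Signed with cookie_secret; the other app can verify and read it
        self.set_secure_cookie("user_id", "42")

class WhoAmIHandler(tornado.web.RequestHandler):
    def get(self):
        user_id = self.get_secure_cookie("user_id")  # None if missing or tampered with
        self.write({"user_id": user_id.decode() if user_id else None})

app = tornado.web.Application(
    [(r"/login", LoginHandler), (r"/whoami", WhoAmIHandler)], **SETTINGS
)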
A shared DB (or an API that the other server can use)
If you go with this solution, you just have to store a session key in the cookie. There's no need to use a secure cookie since no data is stored in it. You simply generate an SSID, e.g. ssid = uuid.uuid4().hex, store that in a cookie called something like ssid, and also add a record to the DB along with all the session data you want to store. I really like Redis for this since you can set the expiry on creation and don't have to worry about it anymore, it's pretty fast, and the best thing is that there's a nice and easy async lib you can use that plays nicely with Tornado.
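A minimal sketch of that approach; it uses the synchronous redis-py client to keep the example short (the async lib mentioned above would be the better fit), and the handler, key prefix, and TTL are assumptions:

import json
import uuid

import redis
import tornado.web

r = redis.Redis(host="localhost", port=6379)
SESSION_TTL = 3600  # seconds

class BaseHandler(tornado.web.RequestHandler):
    def create_session(self, data: dict) -> None:
        ssid = uuid.uuid4().hex
        r.setex(f"session:{ssid}", SESSION_TTL, json.dumps(data))  # Redis expires it for you
        self.set_cookie("ssid", ssid)

    def load_session(self) -> dict | None:
        ssid = self.get_cookie("ssid")
        if not ssid:
            return None
        raw = r.get(f"session:{ssid}")
        return json.loads(raw) if raw else None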
I have a web app which, during the login process, stores the userId in an HTTP session variable (after confirmation, of course!). I'm not using any session variables other than this one to retrieve information about the user. I don't know yet if this is the most scalable solution for me. Does my server reserve any memory for this? Is it better to use cookies instead?
If you are using multiple application servers (now or in the future), I believe the HTTP session variable is tied to the server the user is on (correct me if I'm wrong), so in this case you can find a "sticky session" solution that locks the user to a particular server (e.g. EC2's load balancers offer this: http://aws.amazon.com/about-aws/whats-new/2010/04/08/support-for-session-stickiness-in-elastic-load-balancing/ ).
I recommend using a cookie (assuming my logic above is right), but you should make sure you have some sort of security measure on it so users can't change their cookie and gain access to another user's account. For example, you could hash some string with a secret key and the user ID, and check that hash server-side to confirm it has not been tampered with.
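A minimal sketch of that signed-cookie idea; the cookie format and names are assumptions for illustration:

import hashlib
import hmac

SECRET_KEY = b"replace-with-a-long-random-secret"

def make_cookie_value(user_id: str) -> str:
    """Produce "user_id|signature" so the server can detect tampering."""
    sig = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}|{sig}"

def read_cookie_value(cookie_value: str) -> str | None:
    """Return the user id if the signature checks out, else None."""
    try:
        user_id, sig = cookie_value.split("|", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(expected, sig) else None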
I want my account holders to be able to have their own subdomains, and even their own domains altogether. Using NGINX as my proxy server, should I create server entries for each one in my NGINX conf and have my clients point their domains there, or are there reasons why this would be bad? Also, if I do that, how can I pass account-specific information (the account lives in the Django DB) along with the request? I.e. a request comes from www.spamfoosaccount.com to my server and I proxy it back to Apache, but how does my application know that it came from spamfoo's account, unless I look at request.HTTP_HOST (which might be the best way, but I don't know until I ask)? Thanks in advance.
To know which domain a request is coming from, you have to use request.META["HTTP_HOST"].
However, do not rely on this value for authentication, it can be forged easily. Authentication should be done in the usual way with django.contrib.session. A request from a specific domain/subdomain should not have more privileges/rights, even when the request contains an authenticated session. Privileges should be given to users/groups of users, not to domains.
Note that browser sessions cannot cross second-level domains (e.g. a session cookie from foo.com will not be sent to bar.com); a cookie can however be set for *.foo.com, covering all subdomains (if you explicitly set it so).
Let your users point their DNS records to the IP of your server, let NGINX route the request based on the domain to your backend and do normal authentication in Django.
Your question:
how does my application know that it came from spamfoo's account
I don't know the specifics of your application, but it shouldn't matter where the request came from, only who issued the request (e.g. an authenticated user). You should have a model/field that links your users to their respective domains. When a user is linked to only one domain, the application should assume the user came from that domain. When a user is linked to more than one domain, you can look at request.META["HTTP_HOST"]. If this value matches any of the domains the user is linked to, it's alright; the value may be forged, but only by a user who is linked to that domain anyway.
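A minimal sketch of that model/field linkage in Django; the Account model, its fields, and the helper are assumptions, not from your codebase:

from django.conf import settings
from django.db import models

class Account(models.Model):
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    domain = models.CharField(max_length=255, unique=True)  # e.g. "www.spamfoosaccount.com"

def account_for_request(request):
    """Resolve the account from the authenticated user, falling back to the Host header."""
    accounts = Account.objects.filter(user=request.user)
    if accounts.count() == 1:
        return accounts.first()
    host = request.META.get("HTTP_HOST", "").split(":")[0]  # strip any port
    return accounts.filter(domain=host).first()  # may be None if nothing matches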