Cross-domain cookie handling when the same app server serves both domains

I understand that there are a number of ways/hacks to implement cross-domain cookies, such as iframes, redirects, etc. I believe those methods are necessary when a different app server is serving each domain.
Now, if both domains are served by the same app server, is there an efficient, best-practice method for handling these cookies? Could the app server in this case just keep track of the origin and determine which user each request is associated with, regardless of which target domain is being requested?
Any input would be greatly appreciated.
Bob

Cookies are how a server knows who's talking to it, so having both domains on the same server doesn't really help. When the request comes in, you have the source IP:port, the user agent, the cookies, and that's about it. The IP isn't useful because of NAT (multiple users, one IP) and mobile (one user, multiple IPs, e.g. moving from cellular to Wi-Fi or vice versa). The user agent has similar problems. The answers discussed in Cross-Domain Cookies are still the best options available.
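To make that concrete, here is a minimal sketch, assuming Flask and two hypothetical domains (domain-a.example and domain-b.example) that both resolve to the same app. The Host header tells the server which domain was requested, but the browser only sends cookies that were set for that domain, so knowing the "origin" of the request does not identify the user across domains.

```python
# A minimal sketch, assuming Flask and that both domain-a.example and domain-b.example
# point at this one app. The Host header reveals which domain was requested, but the
# browser only sends cookies scoped to that host, so there is no shared identity.
from flask import Flask, request

app = Flask(__name__)

@app.route("/whoami")
def whoami():
    return {
        "requested_host": request.host,          # e.g. "domain-a.example"
        "cookies_sent": dict(request.cookies),   # only cookies scoped to that host
        "client_addr": request.remote_addr,      # unreliable for identity (NAT, mobile)
    }
```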

Unfortunately, there's still no really direct way to share user data across domains. I found that the iframe implementation was the most reusable.
To this end, I created an NPM module to simplify cross-domain sharing. It gives you a function to produce an iframe with a whitelist of your domains, and get/set functions that let you access that iframe from any whitelisted domain.
https://www.npmjs.com/package/cookie-toss
Hope this helps!

Related

Change a cookie of domain B from domain A; they are on the same dedicated server

I have a dedicated server, which hosts 2 domains, let's say A and B.
I know about the same-origin policy, which makes it impossible to read cookies from another domain, but is it somehow possible when the two domains share the same hardware/owner/server/file system, etc.?
I have noticed an interesting thing: I have ads on domain A, and there are a lot of cookies from ad networks. How did they do it? Can I do something similar with my two domains?
Thanks for the answers/suggestions, I have really no experience in this sphere.
When you run advertisements on your domain, you include some JavaScript code provided by these ad networks.
When that script executes, it runs in the context of the page that embeds it, so it can create and read cookies for the embedding domain. Cookies belonging to the ad network's own domain are set separately, by Set-Cookie headers on the requests the script triggers back to the ad network's servers.
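To sketch that second mechanism (cookies belonging to the other domain), here is a minimal example, assuming Flask and two hypothetical domains: a page on domain A embeds a tracking pixel served from domain B, and domain B's response sets a cookie scoped to domain B. Note that modern browsers require SameSite=None; Secure for such third-party cookies, and many now block them outright.

```python
# A minimal sketch, assuming Flask runs on the hypothetical domain b.example and a page
# on a.example embeds <img src="https://b.example/pixel.gif">. The response below sets a
# cookie scoped to b.example, which is how ad networks get their own cookies onto visits
# to your pages.
from flask import Flask, make_response

app = Flask(__name__)

# Tiny transparent GIF payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff!\xf9\x04"
         b"\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D"
         b"\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    resp = make_response(PIXEL)
    resp.headers["Content-Type"] = "image/gif"
    # This cookie belongs to b.example (the responding domain), not to the embedding page.
    resp.set_cookie("visitor", "some-id", max_age=86400, samesite="None", secure=True)
    return resp
```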

Do I always have to buy a right to use a (unspecific but fixed) domain?

I am new to this topic and was just watching a tutorial, and the presenter said you would have to buy a domain and pay for it monthly.
I get that you would want someone else to host your website for IT-security reasons, which is really not expensive.
But say I just want to access my server data, like my music, images and videos, from anywhere, and I know how to make a website. The domain name is not important to me; I don't need it to be fancy. Do I always have to buy the right to use an (unspecific but fixed) domain?
How does it work? Thanks!
Edit 1 (to specify): I read that hosting the website yourself is not safe. I want to let someone host my website and serve data (like images and videos) to this website or to the client from my home server.
A domain is just an entry in the worldwide DNS servers. It makes it easier to find your server(s). You do not need to have one; instead you can use the IP address that you 'get' from your ISP. You must make sure that the router you got from your ISP forwards the requests to your server.
Another option is to use a free dynamic-DNS service like DynDNS. They give you a server name that will automatically redirect to the IP address your ISP has given you.
If you let someone host your website, they will provide you with a URL under which you can access your server. In fact this is not a domain but a server within their domain. Hosting a website that runs on your desktop can be unsafe. If you use a cheap dedicated server, it is less unsafe, but complete safety is, unfortunately, not possible.

https vs signed url with Cloudfront

I know this is an apples-and-oranges question, but I'd like to understand the pros and cons of using HTTPS and signed URLs with AWS CloudFront. Would people please comment on and add to this list?
HTTPS
PROS
Security: https is more secure than http. Though I'm not sure what this means in practice, because if you can't trust that the URL is actually from Amazon, who can you trust?
Preserve your application's status quo: your site is already fully https for another reason, e.g. you handle credit cards. Using https for CloudFront prevents alerting the user that you are serving insecure content, i.e., the dreaded "yellow" indicator symbol. Could this also be a con if your site is fully http (honest question)?
Degree of difficulty: 0/10. Just change http to https in your URL; it works either way out of the box. On the other hand, if you want to use your own CNAME with https, this seems significantly more confusing, 7/10, though I haven't tried it due to con #1 below...
CONS
Cost: $600/month(!) to use https with your own CNAME, e.g., images.mysite.com instead of blah123.cloudfront.net. On the other hand, my understanding is that using CNAMEs with http is free?
SIGNED URLS
PROS
REAL security: signed URLs seem like the most common way to control who has access to your site's content. You can restrict access by things like the user's IP address and control how long a URL remains valid.
Cost: none
CONS
Degree of difficulty: 9/10. Creating signed URLs is relatively confusing. There's a lot of terminology to learn and possibly some libraries, not part of the AWS SDK, that you'll need to track down.
HTTPS helps secure data in transit, which is helpful if you are already using SSL for access to your application. As for the CNAME issue, most people are likely not going to realize that your images and other static content are being delivered from cloudfront.net instead of yourdomain.com.
Signing URLs only helps control who can access a given file and how long they can access it for. You might use this for delivering digital purchases, or other private files, to logged-in users. You also lose some of the caching benefit of CloudFront.
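For concreteness, here is a minimal sketch of generating a CloudFront signed URL with botocore's CloudFrontSigner. The key-pair ID, private-key path, object URL, and one-hour expiry are placeholder assumptions, and the signing itself leans on the third-party rsa package.

```python
# A minimal sketch, assuming botocore and the third-party `rsa` package are installed,
# and that "private_key.pem" is the PKCS#1 private key matching the CloudFront key pair.
import datetime

import rsa
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "APKAEXAMPLEKEYID"  # placeholder CloudFront key pair ID

def rsa_signer(message):
    # Sign the CloudFront policy with the key pair's private key.
    with open("private_key.pem", "rb") as key_file:
        private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
    return rsa.sign(message, private_key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# Canned policy: the URL is valid for one hour; custom policies can also restrict by IP.
signed_url = signer.generate_presigned_url(
    "https://blah123.cloudfront.net/private/video.mp4",
    date_less_than=datetime.datetime.utcnow() + datetime.timedelta(hours=1),
)
print(signed_url)
```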

Are cookies safe in a Heroku app on herokuapp.com?

I am developing an app, which I will deploy on Heroku. The app is only used within an iframe on another site, so I don't care about the domain name. I plan to deploy my app on example.herokuapp.com instead of using a custom domain on example.com.
My app uses cookies, and I want to be sure that others cannot manipulate my cookies to protect my app against session fixation and similar attacks. If attacker.herokuapp.com is able to set a cookie for herokuapp.com, browsers will not be able to protect me, since herokuapp.com is not a public suffix. See http://w2spconf.com/2011/papers/session-integrity.pdf for a detailed description of the issue.
My question is: When browsers can't protect my users, will Heroku do it by blocking cookies for herokuapp.com?
Just wanted to post an update for anyone who ran across this question as I did. I was working on a similar problem, except that I wanted to purposefully allow access to the same cookie from two different heroku apps.
"herokuapp.com" and "herokussl.com" are now on the Public Suffix List, so your cookies should be safe if they are set for one of those domains. I ended up having to use custom domains in order to share cookies across both apps.
Heroku also released an article on the topic: https://devcenter.heroku.com/articles/cookies-and-herokuapp-com
I just tried to add a cookie from my Heroku app with the response header Set-Cookie: name=value;Path=/;Domain=.herokuapp.com, and to my disappointment, I could see the header intact in my browser. So the Heroku infrastructure does not detect and remove this cross-app supercookie.
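For anyone who wants to repeat that experiment without a browser, here is a minimal sketch using the requests library against a hypothetical endpoint on your own app that tries to set the cross-app cookie; if the header comes back intact, the router is not stripping the Domain attribute.

```python
# A minimal sketch, assuming the `requests` library and a hypothetical endpoint on your
# own app that responds with: Set-Cookie: name=value; Path=/; Domain=.herokuapp.com
import requests

resp = requests.get("https://example.herokuapp.com/set-test-cookie")
# If the platform passed it through unchanged, this prints the full header, including
# the Domain=.herokuapp.com attribute.
print(resp.headers.get("Set-Cookie"))
```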
I see three possible ways to protect a Heroku app against cross-app supercookies:
Don't use cookies at all.
Use a custom domain.
Verify that each cookie was actually set by your app, and restrict it to the client's IP address by checking the X-Forwarded-For header (a sketch of this check follows below).
My feature request to Heroku would be that they filter HTTP responses that go through their HTTP routing, so that applications hosted on their infrastructure cannot set cookies with Domain=herokuapp.com.
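Here is a minimal sketch of the third option, using only the Python standard library. The secret value, cookie format, and the way the client IP is taken from X-Forwarded-For are assumptions to adapt to your app.

```python
# A minimal sketch: bind the session id to the client IP with an HMAC so a cookie set
# by another herokuapp.com app (or replayed from a different IP) fails verification.
import hashlib
import hmac

SECRET = b"replace-with-a-long-random-secret"  # placeholder secret

def make_session_cookie(session_id, client_ip):
    """Produce a cookie value of the form "<session_id>|<mac>"."""
    mac = hmac.new(SECRET, f"{session_id}|{client_ip}".encode(), hashlib.sha256).hexdigest()
    return f"{session_id}|{mac}"

def verify_session_cookie(cookie_value, client_ip):
    """Return the session id if the cookie was produced by this app for this IP, else None."""
    try:
        session_id, mac = cookie_value.rsplit("|", 1)
    except ValueError:
        return None
    expected = hmac.new(SECRET, f"{session_id}|{client_ip}".encode(), hashlib.sha256).hexdigest()
    return session_id if hmac.compare_digest(mac, expected) else None

# client_ip would typically come from the X-Forwarded-For header added by Heroku's
# router, e.g. request.headers["X-Forwarded-For"].split(",")[0].strip()
```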
It seems to me that, as long as you set the cookie for example.herokuapp.com, the cookie is safe from manipulation. A host-only cookie like that will only be presented to the app running on example.herokuapp.com, not to other *.herokuapp.com apps.

Django+apache: HTTPS only for login page

I'm trying to accomplish the following behaviour:
When the user accesses the site via:
http://example.com/
I want him to be redirected to:
https://example.com/
Via middleware, if the user is not logged in, the login template is rendered when accessing /. If the user is logged in, / is the main view. Once the user has logged in, I want the site to work over http.
To do so, I am running the same server on ports 80 and 443 (is this really necessary? I have the impression that I'm running two separate servers with the same application, while what I want is one server listening on two ports).
When the user navigates away from the login page, due to the redirection to the http server, the data in request.session is not present (although it is present on https), thus showing that no user is logged in. So, assuming the Apache setup is correct (running the same server on two different ports), I guess I have to pass the cookie from the server running on https over to http.
Can anybody shed some light on this? Thank you
First off, make sure that the setting SESSION_COOKIE_SECURE is set to False. As long as the domains are the same, the cookies should be present in the browser, and so the session information should still be there.
Take a look at your cookies using a browser plugin. Search for the session cookie you have set; by default this cookie is named "sessionid" by Django. Make sure the domain and path are in fact correct for both the secure session and the regular session.
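For reference, a minimal sketch of the settings.py entries this answer is talking about; the setting names are standard Django settings, and the values just illustrate the suggestion above.

```python
# A minimal sketch of the relevant Django settings for sharing the session cookie
# between the https and http versions of the same domain.
SESSION_COOKIE_SECURE = False      # allow the session cookie to be sent over plain HTTP too
SESSION_COOKIE_NAME = "sessionid"  # Django's default cookie name, mentioned above
# SESSION_COOKIE_DOMAIN = None     # default: host-only cookie for the current domain
```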
I want to warn against this, however. Recently, things like Firesheep have exploited an issue that people have known about but ignored for a long time: these cookies are not secured in any way. It would be easy for someone to "sniff" the cookie over the HTTP connection and gain access to the site as your logged-in user. This essentially eliminates the entire reason you set up a secure connection to log in in the first place.
Is there a reason you don't have a secure connection across the entire site? Traditional arguments about it being more intensive on the server really don't apply with modern CPUs any longer, and the exploits I refer to above are becoming so prevalent that the marginal (really marginal) cost of encrypting all of your traffic is well worth it.
Apache needs to run essentially two different servers because (a) it is listening on two different ports and (b) one is adding some additional encryption logic. That said, this is a normal thing for Apache. I run machines with dozens of "servers" running on different ports and doing different logic. In the grand scheme of things, this shouldn't really weigh your server down.
That said, once you pass the request on to *WSGI or mod_python, you will have to add logic to make sure that no one tries to log in over the non-encrypted connection, because the only difference Django sees is the value of request.is_secure(). All the URLs and views in your urlconf will be accessible over both connections.
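As an illustration of that last point, here is a minimal sketch of Django middleware that uses request.is_secure() to push the login URL onto HTTPS while leaving other URLs alone. The login path and the new-style middleware API (Django 1.10+) are assumptions; adapt them to your urlconf and version.

```python
# A minimal sketch: redirect the (assumed) login URL to HTTPS, leave everything else alone.
from django.http import HttpResponseRedirect

LOGIN_PATH = "/accounts/login/"  # hypothetical login URL; adjust to your urlconf

class LoginRequiresHTTPSMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        if request.path == LOGIN_PATH and not request.is_secure():
            # Re-issue the request over HTTPS so credentials are never sent in the clear.
            host = request.get_host().split(":")[0]
            return HttpResponseRedirect("https://" + host + request.get_full_path())
        return self.get_response(request)
```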
Whew that is a lot. I hope that helps.