Uploading multiple files in Liferay in an application with two nodes - cookies

I'm using Liferay 6.1.0 GA1.
My application runs on two Tomcat nodes, with Varnish in front of them. Varnish routes each request to a particular node based on a cookie that is set on it.
When I try to upload multiple files in Firefox, the upload request loses this cookie (in Chrome it works just fine).
My idea was to extend the URL with a parameter that Varnish could later filter on, but I cannot find where to add it so that the Flash uploader copies it over properly.
Any other ideas that would help are welcome as well.
P.S. Sorry for my bad English.

"Loosing a cookie" means that it explicitly is set to another value, or the hostname changes. I suggest you use Firebug or the built-in Developer tools (hit F12) and monitor the requests and responses that go through the line. Pay attention to Set-Cookie directives in the response headers as well as the Host directive in the request headers. This should give some hints where they're going.
It's hard to give more specific advice with the level of detail you provide.
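For the routing-parameter idea from the question, the Varnish side could look roughly like this. This is only a sketch, assuming Varnish 3.x VCL; the backend addresses, the STICKY cookie name, and the stickyNode parameter are all made up, and the Liferay/Flash side would still need to append that parameter to the upload URL:

    # sketch only -- Varnish 3.x syntax; backends, cookie name and the
    # "stickyNode" parameter are hypothetical
    backend tomcat1 { .host = "10.0.0.1"; .port = "8080"; }
    backend tomcat2 { .host = "10.0.0.2"; .port = "8080"; }

    sub vcl_recv {
        # normal case: route on the sticky cookie
        if (req.http.Cookie ~ "STICKY=node2") {
            set req.backend = tomcat2;
        }
        # fallback: a URL parameter carried by the upload action URL, so the
        # Flash-based multi-file upload still lands on the right node even
        # when Firefox drops the cookie
        else if (req.url ~ "(\?|&)stickyNode=node2") {
            set req.backend = tomcat2;
        }
        else {
            set req.backend = tomcat1;
        }
    }

Even so, it is worth confirming with the developer tools first whether the cookie is really missing from the upload request or is being overwritten, as described above.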

Related

Internet Explorer 8 not saving cookies from my local dev version

Looking at my website in IE8 on Windows XP, cookies work fine. But when I connect to my local dev version over the local network, it's not saving cookies. I'm connecting via an entry in my c:\WINDOWS\system32\drivers\etc\hosts file.
I'm not that familiar with the intricacies of IE8's security settings. Could there be something that the live site does, which the local version doesn't, which means the local version is failing some security test and thus not getting its cookies saved? IE8 is on the default "Medium-High" security setting. I've tried changing it to "Medium" (the lowest) and get the same problem.
When I say it's not saving the cookies, I mean that I'm looking in the IE8 dev tools / Cache / View Cookie Information page, and it's totally empty (apart from the site URL) - there are no cookies saved at all.
Hoping to get some pointers on this; I don't really know where to start trying to fix it.
Thanks, Max
OK, figured it out: the hostname I was using had an underscore in it. When I changed it to a hyphen it worked fine.
Thanks for reading!
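For anyone who hits the same thing: IE (unlike most other browsers of that era) refuses to save cookies for hostnames that contain an underscore, so a hosts entry roughly like the first line below fails while the second works (the hostnames here are made up):

    # c:\WINDOWS\system32\drivers\etc\hosts
    127.0.0.1    my_dev.local    # underscore: IE8 silently drops the cookies
    127.0.0.1    my-dev.local    # hyphen: cookies are saved as expected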

Is someone trying to hack my Django website

I have a website that I built using Django. Using the settings.py file, I send myself error messages that are generated from the site, partly so that I can see if I made any errors.
From time to time I get rather strange errors, and they mostly seem to come from the same area of the site (where I wrote a little tutorial trying to explain how I set up a Django Blog Engine).
The errors I'm getting all appear like something I could have done in a typo.
For example, these two errors are very close together. I never had an 'x' or 'post' as a variable on those pages.
'/blog_engine/page/step-10-sub-templates/{{+x.get_absolute_url+}}/'
'/blog_engine/page/step-10-sub-templates/{{+post.get_absolute_url+}}/'
The user agent is:
'HTTP_USER_AGENT': 'Mozilla/5.0 (compatible; Purebot/1.1; +http://www.puritysearch.net/)',
Which I take it is a scraper bot, but I can't figure out what they would be able to get with this kind of attack.
At the risk of sounding stupid, what should I do? Is it a hack attempt or are they simply trying to copy my site?
Edit: I'll follow the advice already given, but I'm really curious as to why someone would run a script like this. Are they just trying to copy the site? It isn't hitting admin pages or even any of the forms. It seems like a harmless (aside from potential plagiarism) attempt to dig in and find content.
From your USER_AGENT info it looks like this is a web spider from puritysearch.net.
I suggest you put a CAPTCHA on your website. Program it to trigger when something tries to access, say, 10 pages in 10 seconds (hardly any human would do that; or work out a better criterion for triggering your CAPTCHA). A rough middleware sketch follows below this answer.
Also, maintain a robots.txt file, which most crawlers honor. State your rules in robots.txt; you can tell crawlers to keep off certain busy sections of your site, and so on.
If the problem persists, you might want to contact that particular site's system admin and try to figure out what's going on.
This way you will not be completely blocking crawlers (which are needed for your website to become popular), and at the same time you are making sure that your users get a fast experience on your site.
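Here is a rough sketch of the rate-trigger idea as a small piece of Django middleware (old-style middleware, matching the Django versions of that era). The threshold, cache key prefix, and the /captcha/ redirect target are assumptions for illustration, not anything from the question:

    # ratelimit_middleware.py -- sketch only; thresholds and the CAPTCHA
    # redirect target are hypothetical
    import time

    from django.core.cache import cache
    from django.http import HttpResponseRedirect

    WINDOW_SECONDS = 10   # the "10 pages in 10 seconds" rule from above
    MAX_REQUESTS = 10

    class RateLimitMiddleware(object):
        """Count requests per IP in a short window and send suspiciously
        fast clients to a CAPTCHA page."""

        def process_request(self, request):
            ip = request.META.get('REMOTE_ADDR', 'unknown')
            key = 'ratelimit:%s' % ip
            now = time.time()
            # keep only the timestamps that still fall inside the window
            history = [t for t in cache.get(key, []) if now - t < WINDOW_SECONDS]
            history.append(now)
            cache.set(key, history, WINDOW_SECONDS)
            if len(history) > MAX_REQUESTS and request.path != '/captcha/':
                return HttpResponseRedirect('/captcha/')  # hypothetical URL
            return None

Add the class to MIDDLEWARE_CLASSES in settings.py. robots.txt takes care of the well-behaved crawlers; something like this only catches the ones that ignore it.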
Project HoneyPot has this bot listed as a malicious one http://www.projecthoneypot.org/ip_174.133.177.66 (check the comments there) and what you should probably do is ban that IP and/or Agent.

Is it possible to be attacked with XSS on a static page (i.e. without PHP)?

A client I'm working for has mysteriously ended up with some malicious scripting going on on their site. I'm a little baffled however because the site is static and not dynamically generated - no PHP, Rails, etc. At the bottom of the page though, somebody opened a new tag and a script. When I opened the file on the webserver and stripped the malicious stuff and re-uploaded, it was still there. How is this possible? And more importantly, how can I combat this?
EDIT:
To make it weirder, I just noticed the script only shows up in the source if the page is accessed directly as 'domain.com/index.html' but not as just 'domain.com'.
EDIT2:
At any rate, I found some php file (x76x09.php) sitting on the web server that must have been updating the html file despite my attempts to strip it of the script. I'm currently in the clear but I do have to do some work to make sure rogue files don't just appear again and cause problems. If anyone has any suggestions on this feel free to leave a comment, otherwise thanks for the help everyone! It was very much appreciated!
No, it's not possible unless someone has access to your files. So in your case, someone has access to your files.
Edit: It's best if you ask in serverfault.com regarding what to do in case the server is compromised, but:
change your shell passwords
have a look at /var/log/messages for login attempts
finger root
have a look at last modification time of those files
There is also a high probability that the files were altered via HTTP, through a vulnerability in a software component you use alongside the static files.
To the point about the site not having pages executing on the server, XSS is absolutely still possible using a DOM based attack. Usually this will relate to JavaScript execution outputting content to the page. Just last week WhiteHat Security had an XSS vulnerability identified on a purely “static” page.
It may well be that the attack vector relates to file level access but I suggest it’s also worthwhile taking a look at what’s going on JS wise.
You should probably talk to your hosting company about this. Also, check that your file permissions aren't more lenient than they should be for your particular environment.
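To spot rogue files such as that x76x09.php turning up again, one low-tech option is a baseline-and-compare script run from cron. A rough sketch, with WEBROOT and BASELINE as assumed paths you would adjust:

    # integrity_check.py -- sketch only; WEBROOT and BASELINE are
    # hypothetical paths
    import hashlib
    import json
    import os

    WEBROOT = '/var/www/html'             # assumed document root
    BASELINE = '/root/site_hashes.json'   # assumed location, outside the webroot

    def hash_tree(root):
        """Return {relative_path: sha1_of_contents} for every file under root."""
        hashes = {}
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                with open(path, 'rb') as fh:
                    hashes[os.path.relpath(path, root)] = hashlib.sha1(fh.read()).hexdigest()
        return hashes

    if __name__ == '__main__':
        current = hash_tree(WEBROOT)
        if not os.path.exists(BASELINE):
            # first run: record the known-good state
            with open(BASELINE, 'w') as fh:
                json.dump(current, fh)
        else:
            with open(BASELINE) as fh:
                known = json.load(fh)
            added = sorted(set(current) - set(known))
            changed = sorted(p for p in known if p in current and current[p] != known[p])
            if added or changed:
                print('New files: %s' % ', '.join(added))
                print('Changed files: %s' % ', '.join(changed))

Run it from cron and mail yourself the output; it will not stop an attacker, but it tells you quickly when something like that rogue PHP file reappears.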
That's happened to me before; it happens if they get your FTP details. So whoever did it obviously got hold of your FTP details somehow.
The best thing to do is change your password and contact your web hosting company to figure out a better solution.
Unfortunately, FTP isn't the most secure...

In Django, can I always force browser and provider caches to load new pages with a global setting?

I have a handful of users on a server. After updating the site, they don't see the new pages. Is there a way to globally force their browsers and providers to display the new page? Maybe from settings.py? I see there are decorators that look like they do this on a function level.
It depends on the browser and cache settings.
There may be no way to tell browsers to do so (once pages are cached, they are not even talking to the server, so there is nothing you can do there).
A good trick is to set a Vary: Cookie header, so you can always invalidate the cache (by changing a cookie somewhere) in case of need.
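If you do want to push no-cache headers from one global place rather than per view, a small custom middleware is one option. A rough sketch, using the old-style middleware API of the Django versions of that era (whether you really want to disable caching site-wide is a separate question):

    # nocache_middleware.py -- sketch only; add the class to
    # MIDDLEWARE_CLASSES in settings.py if you decide to use it
    class NoClientCacheMiddleware(object):
        """Tell browsers and intermediate caches not to reuse stale copies."""

        def process_response(self, request, response):
            response['Cache-Control'] = 'no-cache, no-store, must-revalidate, max-age=0'
            response['Pragma'] = 'no-cache'   # for older HTTP/1.0 caches
            response['Expires'] = '0'
            response['Vary'] = 'Cookie'       # the trick mentioned above
            return response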
One way to force the browser to load a new page rather than loading the cached version is to change the file name. You could add a date/time to the file name and use a rewrite rule (assuming Apache web server here) to get the new page.
This site gives a quick explanation: http://www.askapache.com/htaccess/mod_rewrite-fix-for-caching-updated-files.html
and Google will show many more.
You may also have to examine your cache-control headers.
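A variant of the file-name trick that does not need Apache rewrite rules is to version the URLs you emit, for example by appending the file's modification time as a query string so that a changed file automatically gets a new URL. A rough helper sketch; STATIC_DIR and STATIC_URL are assumed names, and in Django you would typically wrap something like this in a template tag:

    # versioned_url.py -- sketch only; STATIC_DIR and STATIC_URL are
    # hypothetical settings
    import os

    STATIC_DIR = '/var/www/site/static'   # assumed filesystem location
    STATIC_URL = '/static/'               # assumed URL prefix

    def versioned_url(relative_path):
        """Return e.g. '/static/css/site.css?v=1334567890' so that a changed
        file busts browser and proxy caches automatically."""
        mtime = int(os.path.getmtime(os.path.join(STATIC_DIR, relative_path)))
        return '%s%s?v=%d' % (STATIC_URL, relative_path, mtime)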

Besides URL rewriting, what options are available for maintaining sessions without using cookies?

I've seen various options for URL rewriting here on Stack Overflow, and other places on the web, but was curious to see if there were other options.
This is speculation, as Cookies and URL Rewriting are the big two, but technologically, I think it'd be possible to:
do some massive hackery with JavaScript that captures all links and submits a form with the information.
track the session on the server based on IP (a rough sketch of this follows below).
Both have their downsides and holes obviously.
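For illustration, the IP-based idea could look roughly like this on the server side. This is a plain Python sketch, not tied to any framework, and it inherits the obvious downside mentioned above: users behind the same NAT or proxy would share one session, and a real deployment would also need expiry and a shared store:

    # ip_sessions.py -- sketch only, to illustrate the IP-keyed approach
    SESSIONS = {}  # client IP -> session data

    def get_session(client_ip):
        """Fetch (or lazily create) the session dict for this client IP."""
        return SESSIONS.setdefault(client_ip, {})

    # usage inside a request handler (framework-specific details omitted):
    #   session = get_session(remote_addr)
    #   session['cart'] = ['item-1', 'item-2']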
Session variables? At work, we are not allowed to use non-session cookies without a load of permissions.
You can either maintain state through a cookie or through a query parameter. The browser needs to be able to pass data to the web server somehow and those are the only two options.
I suppose that would depend on what technology you are using. In ColdFusion you can maintain session variables without cookies.
Using client-side database storage, such as Google Gears (SQLite)? HTML5 is expected to include one (WebKit already does).