I have searched for a way to clear a browser's cookies for a specific site just by appending parameters to the end of a URL. I assume that if this even existed it would be browser dependent. Since a 30-minute search turned up nothing, I assume it doesn't exist for any browser, but I hope I'm wrong.
The reason I need this is a bit odd. I use BigCommerce, and I need to clear out everything from my cart. I spent yesterday afternoon trying to find a BigCommerce argument that allows for this. After talking with support, it turns out they don't have a clearCart function like other shopping cart platforms have; they only have a clear-item function/parameter that you can append to the end of a URL.
If a user clicking a link could clear all the cookies created by my site, that would empty the cart. I have tried to hack the clear-item parameters, but I never could find a way to clear all items.
The clear item URL is like this: mysite.com/cart.php?action=remove&item=52fa8fd1e398b
Speaking generally first: no, there is nothing in the cookie specification or in browser implementations that supports arbitrarily clearing all cookies for a particular domain through HTTP GET parameters. It would be a horrible security hole if it existed in either specification or practice. If it were possible, I could maliciously redirect your browser to some known site and destroy information that you did not want destroyed.
Now to your particular situation... I gather that what you want is not so much to clear this site's cookies from your own browser, but rather to implement a "clear cart" button on your own BigCommerce site. Is that right?
If so, cookies are probably not the path forward you are looking for. I don't think BigCommerce exposes shopping cart functions through their API, but I would be interested to hear from the BigCommerce folks on that.
Your best bet for adding a "clear cart" function is to add JavaScript to the page that iterates over all cart items and makes the HTTP call you mentioned for each one. If you do this, make sure to have some graceful fallback for the case where they change the URI. :)
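For what it's worth, here is a minimal jQuery sketch of that idea. The [data-item-id] selector is an assumption (your cart template may expose the item hashes differently); it simply reuses the remove URL from your question once per item:

    function clearCart() {
        // Collect one remove request per cart item. The [data-item-id]
        // selector is a placeholder - adjust it to however your theme
        // exposes the item hashes.
        var requests = $('[data-item-id]').map(function () {
            return $.get('/cart.php', {
                action: 'remove',
                item: $(this).data('item-id')
            });
        }).get();

        // Reload the cart once every remove call has finished.
        $.when.apply($, requests).always(function () {
            window.location.reload();
        });
    }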
Related
One of my clients uses Sellerdeck as their shopping cart solution. I am currently implementing a service for them that relies heavily on cookies.
The cookie is set on a product page which has a URI that is something like http://www.mydomain.co.uk/retail/acatalog/A11-Insect-Net.html. When I browse around the site, I can see the cookie set on all pages, like it is supposed to.
Then when I go into the checkout process, Sellerdeck apparently starts using Perl, because the URI changes to something like http://www.mydomain.co.uk/cgi-bin/retail/ca001000.pl. The weird thing is that, although we're still on the same domain, I can't see the cookie. When I go back to the product pages it is there again.
Does anyone know why this might be?
Turns out the cookie was tied to a specific path and not to /. Fixed now.
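For anyone hitting the same thing: the fix is to set the cookie with path=/ so the browser sends it on every path of the domain, including the /cgi-bin/ checkout pages. A minimal client-side sketch (cookie name and value are placeholders):

    // path=/ makes the cookie visible on every path of the domain,
    // including /cgi-bin/retail/; without it the browser scopes the
    // cookie to the directory of the page that set it.
    document.cookie = "tracking_id=abc123; path=/; max-age=" + (60 * 60 * 24 * 30);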
I would like Akamai not to cache certain URLs if a specified cookie exists (i.e., if the user is logged in on specific pages). Is there any way to do this with Akamai?
The good news is that I have done exactly this in the past for the Top Gear site (www.topgear.com/uk). The logic is that if a cookie is present (in this case "TGCACHEKEY") then the Akamai cache is bypassed for certain URL paths. This basically turns off Akamai caching of HTML pages when logged in.
The bad news is that you require an Akamai consultant to make this change for you.
If this isn't an option for you, then Peter's suggestions are all good ones. I considered all of these before implementing the cookie based approach for Top Gear, but in the end none were feasible.
Remember also that Akamai strips cookies for cached resources by default. That may or may not affect you in your situation.
The Edge Server doesn't check for a cookie before it makes the request to your origin server, and I have never seen anything like that in any of their menus, configuration screens, or documentation.
However, there are a few ways I can think of that you can get the effect that I think you're looking for.
You can specify in the configuration settings for the respective digital property what path(s) or URL(s) you don't want it to cache. If you're talking about a logged-on user, you might have a path that only they would get to, or you could set up such a thing server-side. E.g., for an online course you would have www.course.com/php.html that anybody could get to, whereas you might use www.course.com/student/php-lesson-1.html for the actual logged-on lesson content. Specifying that /student/* should not be cached would solve that.
If you are serving the same pages to both logged-on and not-logged-on users and can't do it that way, you could check server-side whether they're logged on and, if so, add a cache-breaker to the links so that when they follow a link the cache-breaker is automatically included. You could also do this client-side if you want, but it would be more secure and faster to do it server-side. As a note, the cache-breaker could be userid-random#. That would keep it unique enough, when combined with the page, that nobody else would request it and get the earlier 'cache-broken' page.
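As a rough illustration of that second option, here is a client-side sketch (server-side is preferable, as noted, but the shape is the same). The logged_in cookie and the user id variable are placeholders for however you detect a logged-on user:

    // Client-side sketch of the cache-breaker idea. The "logged_in" cookie
    // and currentUserId are placeholders for your own login detection.
    var loggedIn = document.cookie.indexOf('logged_in=1') !== -1;
    var userId = window.currentUserId || 'anon';

    if (loggedIn) {
        var breaker = 'nocache=' + userId + '-' + Math.floor(Math.random() * 1e9);
        Array.prototype.forEach.call(document.querySelectorAll('a[href^="/"]'), function (link) {
            // Append the cache-breaker so the followed link bypasses the cached copy.
            link.href += (link.href.indexOf('?') === -1 ? '?' : '&') + breaker;
        });
    }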
If neither of the above is workable, there is one other way I can think of, which is a bit unconventional to say the least, but it would work. Create a symbolically linked directory in your document root with another name, so that you can apply the first option and exempt it from caching. Then you check whether the user is logged on and, if so, prepend the extra directory to the links. From Akamai's point of view, www.mysite.com/logged-on/page.html can be exempt from caching while www.mysite.com/content/page.html is cached. On your server, if /logged-on/ symbolically links to /content/, then you're all set.
When they log in you could send them to a subdomain which is set up as a ServerAlias, so on your side it's the same, but on Akamai it has different cache handling rules.
Following the same approach as @llevera's answer, you can use cookies on CloudFlare without needing engineers to make the change for you.
Using that sort of cookie to bypass caching is a technique that is becoming more popular over time, and even big companies like Magento are using it for the Magento 2 platform.
But the solutions above are still valid. Maybe Akamai already supports this now; we are in 2017!
I'm looking for a clean way to let search engine spiders bypass @login_required and view pages which typically require a logged-in user. I could write middleware that would automatically log search engines into a dummy account, but that's not exactly what I'd call clean. Any suggestions for a better solution? Thanks.
Don't do this. This is 'cloaking', and can get you banned from Google's index.
Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.
Cloaking: http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Instead, you need to implement Google's First Click Free solution. In this setup, the first click from a Google search result is able to see the full content, while subsequent clicks are trapped. This can be done on a referrer basis or a cookie basis. You can read more about First Click Free here:
First Click Free: http://www.google.com/support/webmasters/bin/answer.py?answer=74536
Why would you want to do this? If search engines can see the pages, then anyone can see them without being logged in, because the information would surface on the search engine's results page. In any case, the only way to identify a spider or bot is by its user agent string, which is trivial to spoof.
I don't get it. In "@login_required" you have an important word: "required". If it's "required", it's for a good reason: it means that, in order to see the web page, your credentials are mandatory, because the content is private, secret, etc.
If you want your pages to be available via search engines, you have to make them public, and thus login is not required anymore. And so your view should not be protected by the @login_required decorator.
Maybe your problem lies beyond the availability of your pages. Maybe your content is actually meant to be public, and your views should not be protected by this decorator. Maybe the only thing you need is to load the public part for every user (logged in or anonymous) and only load the private bits if your user is identified.
Otherwise, leaving a backdoor for spiders is definitely a bad idea, because your private content won't be private anymore.
I have a forum where anonymous users are allowed to post, protected by a CAPTCHA. For users' convenience, I set a cookie for such a user which lasts about a month, so the user does not get the CAPTCHA over and over again. In its simplest form the cookie is called no_captcha_for_one_month and its value is 1. When the user returns and posts anonymously, he gets no CAPTCHA.
Anyone see the flaw? A forum spammer just needs to fill out the CAPTCHA correctly once, use the cookie information for his bot, and there he goes.
I thought about getting creative and using a server-side hash which includes e.g. the user's IP address and some secret salt to generate the cookie value, but it would still be valid for that IP address, of course.
Somehow I get the impression the question is silly and I am trying to solve something unsolvable.
I would recommend implementing your cookie value + salt idea not to solve your problem but for security reasons. As explained by this blog post, WordPress had a similar, albeit much more severe, problem due to poor cookie security. In your case a determined spammer could always bypass your CAPTCHA even if the cookie had expired.
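To make the cookie value + salt idea concrete, here is a minimal sketch (Node.js is used purely for illustration; the secret is a placeholder and must stay on the server). It doesn't stop a spammer who solved the CAPTCHA once, but it does stop anyone from simply inventing a valid cookie:

    var crypto = require('crypto');

    // Server-side secret; never sent to the client. Placeholder value.
    var SECRET = 'replace-with-a-long-random-secret';

    // Cookie value = HMAC of the visitor's IP address, so a copied cookie
    // is only accepted from the IP it was issued for and cannot be forged.
    function captchaCookieValue(ipAddress) {
        return crypto.createHmac('sha256', SECRET).update(ipAddress).digest('hex');
    }

    function cookieIsValid(cookieValue, ipAddress) {
        return cookieValue === captchaCookieValue(ipAddress);
    }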
In order to solve the proposed problem, the only solution that comes to mind would be to implement a forced-CAPTCHA algorithm that would override your newly secured cookie if it felt the user was being spammy. Off the top of my head I would use attributes like time since last post, number of posts today, the length of time it took to compose the message on the form, etc.
Edit: I should also mention that you can make your forum less attractive to spammers in the first place by implementing the rel="nofollow" attribute on user submitted links. See Wikipedia.
With such a solution it is always possible to use the cookie for a bot, no matter what you try.
As said below, a cookie can easily be taken from a browser and pasted into bot code, so the solution isn't robust.
Other solutions:
Find some users who post a lot in the forum and ask them if they would volunteer to be moderators. A forum like the AutoHotkey one uses this system, and it works fine. Spammers tend to avoid active forums where moderation is fast and efficient; they prefer dead forums...
Limit the number of anonymous posts per IP address (a minimal sketch follows below). This can be annoying for users, but it can prevent spam flooding. Set it up only if you experience such flooding.
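Here is that sketch, kept deliberately simple (the window and the limit are arbitrary placeholder values; a real forum would keep the counters in a database rather than in memory):

    var WINDOW_MS = 60 * 60 * 1000;   // one hour; placeholder value
    var MAX_POSTS = 5;                // placeholder value
    var postCounts = new Map();       // ip -> { start, count }

    function allowAnonymousPost(ipAddress) {
        var now = Date.now();
        var entry = postCounts.get(ipAddress);
        // Start a fresh window if there is none or the old one has expired.
        if (!entry || now - entry.start > WINDOW_MS) {
            entry = { start: now, count: 0 };
            postCounts.set(ipAddress, entry);
        }
        entry.count += 1;
        return entry.count <= MAX_POSTS;
    }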
Even worse, because you are using a cookie, the spammer doesn't even need to do the CAPTCHA once. Cookies can be changed by the client; they are sent by the browser with the page request, so the client can send whatever it wants. In fact, spam requests would come from a script, so it's even easier to fabricate the cookies.
Storing the variable server-side will solve the problem I've mentioned: you set a random hash as the cookie, and have a table that stores the CAPTCHA status on the server. For the spammer to get no CAPTCHA, they would have to guess a hash that has the correct variable stored server-side, which is very hard to do.
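A minimal sketch of that (Node.js purely for illustration; the Map stands in for a real database table):

    var crypto = require('crypto');
    var solvedCaptcha = new Map();   // token -> expiry timestamp

    function issueCaptchaToken() {
        // An unguessable random value; this is what goes into the cookie.
        var token = crypto.randomBytes(32).toString('hex');
        solvedCaptcha.set(token, Date.now() + 30 * 24 * 60 * 60 * 1000); // roughly one month
        return token;
    }

    function tokenIsValid(token) {
        var expires = solvedCaptcha.get(token);
        return expires !== undefined && expires > Date.now();
    }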
As for the problem you mentioned (that having to solve a CAPTCHA only once a month might not be enough to deter a spammer), you can't get around that. You have to show a CAPTCHA to every real user as often as you want the spammer to enter one as well. Remember, a CAPTCHA is necessary because you can't tell a spammer from a normal user.
You should have the CAPTCHA show often; it will convince people to sign up anyway.
Encrypt the time (in pico- or nanoseconds), set it as a hidden input value, and store it in your database in a column named 'hash'.
Set that on every page and check whether it matches the value in the DB.
I've got a web service which is called through JavaScript (jQuery) to retrieve data from the database. I would like to make sure that only my web pages can execute those web methods (i.e., I don't want people to execute those web methods directly; they could find out the URL by looking at the JavaScript source code, for example).
What I'm planning to do is add a 'Key' parameter to all the web methods. The key will be stored in the web page in a hidden field, and its value will be set dynamically by the web server when the page is requested. The key will only be valid for, say, 5 minutes. This way, when a web method needs to be executed, the JavaScript will pass the key to the web method, and the web method will check that the key is valid before doing whatever it needs to do.
If someone wants to execute the web methods directly, they won't have the key, which should make them unable to do so.
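To make the idea concrete, here is a rough sketch of what I have in mind (Node.js purely for illustration; the secret and the lifetime are placeholders):

    var crypto = require('crypto');
    var SECRET = 'replace-with-a-long-random-secret';   // stays on the server
    var LIFETIME_MS = 5 * 60 * 1000;                     // 5 minutes

    // Key placed in the hidden field: a timestamp plus an HMAC over it, so
    // the server can verify it later without storing anything.
    function issueKey() {
        var ts = Date.now().toString();
        var sig = crypto.createHmac('sha256', SECRET).update(ts).digest('hex');
        return ts + '.' + sig;
    }

    // Check performed by each web method before doing any work.
    function keyIsValid(key) {
        var parts = key.split('.');
        if (parts.length !== 2) { return false; }
        var expected = crypto.createHmac('sha256', SECRET).update(parts[0]).digest('hex');
        return expected === parts[1] && Date.now() - Number(parts[0]) < LIFETIME_MS;
    }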
What are your views on this? Is there a better solution? Do you foresee any problems with mine?
MORE INFO: for what I'm doing, the visitors are not logged in, so I can't use a session. I understand that if someone really wants to break this, they can parse the HTML and get the value of the hidden field, but they would have to do this regularly as the key will change every x minutes... which is of course possible, but hopefully it will be a pain for them.
EDIT: what I'm doing is a web application (as opposed to a web site). The data is retrieved through web methods (+jquery). I would like to prevent anyone from building their own web application using my data (which they could if they can execute the web methods). Obviously it would be a risk for them as I could change the web methods at any time.
I will probably just go for the referrer option. It's not perfect but it's easy to implement. I don't want to spend too much time on this as some of you said if someone really wants to break it, they'll find a solution anyway.
Thanks.
Well, there's nothing technically wrong with it, but your assumption that "they won't have the key, which should make them unable to do so" is incorrect, and thus the security of the whole thing is flawed.
It's very trivial to retrieve the value of a hidden field and use it to execute the method.
I'll save you a lot of time and frustration: If the user's browser can execute the method, a determined user can. You're not going to be able to stop that.
With that said, any more information on why you're attempting to do this? What's the context? Perhaps there's something else that would accomplish your goal here that we could suggest if we knew more :)
EDIT: Not a whole lot more info there, but I'll run with it. Your solution isn't really going to increase the security at all and is going to create a headache for you in maintenance and bugs. It will also create a headache for your users in that they would then have an 'invisible' time limit in which to perform actions on pages. With what you've told us so far, I'd say you're better off just doing nothing.
What kind of methods are you trying to protect here? Why are you trying to protect them?
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session.
If you are sending a client a key that they will send back every time they want to use a service, you are in effect creating a session. The key you are passing back and forth is functionally no different than a cookie (except that it will be passed back only on certain requests). Might as well save yourself the trouble and set a temporary cookie that will expire in 5 minutes. Add a little server-side check for expired cookies and you'll have probably the best you can get.
You may already have such a key if you're using a language or framework that sets a session id. Send that with the Ajax call. (Note that such a session lasts a bit longer than five minutes, but note also it's what you're using to keep state for the user's regular HTTP GETs and POSTs.)
What's to stop someone requesting a webpage, parsing the results to pull out the key and then calling the webservice with that?
You could check the referrer header to verify that the call is coming from one of your pages, but that is also easy to spoof.
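For illustration, a minimal Express-style sketch of such a referrer check (the domain and the /api path are placeholders, and again, the header is trivial to fake from a script):

    var express = require('express');
    var app = express();

    // Reject API calls whose Referer header does not point at our own pages.
    // The domain is a placeholder; non-browser clients can fake this header.
    app.use('/api', function (req, res, next) {
        var referer = req.get('Referer') || '';
        if (referer.indexOf('https://www.example.com/') !== 0) {
            return res.status(403).send('Forbidden');
        }
        next();
    });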
The only way I can see to solve this is to require authentication. If the web pages that call the web service require the user to be logged in, then you can check that they're logged in when they call the web service. This doesn't stop other pages from using your web service, but it does let you track usage more closely, and with some rate limiting you should be able to prevent abuse of your service.
If you really don't want to risk your webservice being abused then don't make it public. That's the only failsafe solution.
Let's say you generate a key valid from 12:00 to 12:05. At 12:04 I open the page and read it at my leisure, and at 12:06 I trigger an action which uses your web service. I'll be blocked from doing so even though I'm a legitimate visitor.
I would suggest restricting access to the web services by HTTP referrer (allow only requests from your domain and null referrers) and/or requiring user authentication for calling the methods.