Facebook Like does not work for our URL

I have a problem with the Like button. As soon as a user clicks the Like button, the comment box appears for a (very) short time and then disappears again, without giving the user the ability to like.
It is related to the URL the Like button refers to. If I put another URL into the iframe, it works. Why is it not possible to like our website (www.comex.eu)? Does anyone have an idea?
Thanks for your answers.

Facebook sometimes prevents a Like from registering if it believes the site is likejacking, is malicious, etc., or even if too many Likes are coming from a specific IP address. Give it some time (1-2 days) and it should eventually start working.
Alternatively, try Liking the site from a different account or a different IP address. But don't keep Liking and un-Liking it, or Facebook's system will think it's spam.

Related

Tracking unauthenticated users in Django

I need to track unregistered users in my Django website. This is for conversion optimization purposes (e.g. registration funnel, etc).
A method I've used so far is using IP address as a proxy for user_id. For various well-known reasons, this has led to fudged/unreliable results.
Can I sufficiently solve my problem via setting a session variable at server-side? An illustrative example would be great.
For example, I currently have a couple of approaches in mind. One is doing request.session["temp_id"] = random.randint(1, 1000000) and then tracking based on temp_id.
Another is setting a session variable every time an unauthorized user hits my web app's landing page, like so:
    if not request.session.exists(request.session.session_key):
        request.session.create()
From here on, I'll simply track them via request.session.session_key. Would this be a sound strategy? What major edge-cases (if any) do I need to be aware of?
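For illustration, that session-key strategy might look like this in a Django view. This is a minimal sketch: PageView is a hypothetical tracking model, not part of Django; only the session calls are Django's own API.

    # views.py -- sketch only; assumes Django's session middleware is enabled.
    from django.http import HttpResponse
    from myapp.models import PageView   # hypothetical tracking model

    def landing_page(request):
        # Make sure even unauthenticated visitors get a session.
        if not request.session.session_key:
            request.session.create()
        # Record the hit keyed by the opaque, randomly generated session key.
        PageView.objects.create(
            session_key=request.session.session_key,
            path=request.path,
        )
        return HttpResponse("Welcome")

One edge case worth noting: Django rotates the session key on login (and clears it on logout), so joining pre- and post-registration activity requires copying the old key over before that rotation happens.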
Cookies are the simplest approach, but take into consideration that some users can have cookies turned off in their browsers.
So for those users you can use the browser's JavaScript storage to set some data: sessionStorage is cleared once you close the browser, while localStorage persists, but either is fine for funnel purposes. Still, other users can have JavaScript turned off.
Another approach would be to put a custom data key in every link on the page when generating the template. In other words, you would have the session_id stored in the HTML page and sent through URL parameters on click. Something similar happens with the CSRF token; look into that.
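A rough sketch of that link-token idea in Django (the context processor and the t parameter are made up for the example):

    # context_processors.py -- illustrative sketch, not an official API.
    def tracking_token(request):
        if not request.session.session_key:
            request.session.create()
        return {"tracking_token": request.session.session_key}

    # Template usage would then look like:
    #   <a href="/some/page/?t={{ tracking_token }}">Some page</a>
    # and the receiving view reads request.GET["t"].

Note that putting the real session key into URLs leaks it through Referer headers and browser history; a separate random token tied to the session is safer.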

Cookie not kept when moving from html to perl page

One of my clients uses Sellerdeck as their shopping cart solution. I am currently implementing a service for them that relies heavily on cookies.
The cookie is set on a product page which has a URI that is something like http://www.mydomain.co.uk/retail/acatalog/A11-Insect-Net.html. When I browse around the site, I can see the cookie set on all pages, like it is supposed to.
Then when I go into the checkout process, Sellerdeck apparently starts using Perl, because the URI changes to something like http://www.mydomain.co.uk/cgi-bin/retail/ca001000.pl. The weird thing is that, although we're still on the same domain, I can't see the cookie. When I go back to the product pages, it is there again.
Does anyone know why this may be?
Turns out the cookie was tied to a specific path and not to /. Fixed now.
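For anyone hitting the same thing: the fix is to pin the cookie to the site root so the browser sends it for both the /retail/... and /cgi-bin/... paths. In Python terms (the cookie name and value here are placeholders):

    from http.cookies import SimpleCookie

    cookie = SimpleCookie()
    cookie["basket_id"] = "abc123"      # placeholder name/value
    cookie["basket_id"]["path"] = "/"   # without an explicit path, the browser
                                        # may scope the cookie to the page's path
    print(cookie.output())              # Set-Cookie: basket_id=abc123; Path=/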

Parallel website running alongside my original website

We have been working on a gaming website. Recently, while making note of the major traffic sources, I noticed a website that I found to be a carbon copy of our website. It uses our logo; everything is the same as ours, but with a different domain name. It cannot simply be that their domain name points to our domain, because in several places the links look like ccwebsite/our-links. That website even has links to some images as ccwebsite/our-images.
What has happened? How could they have done that? What can I do to stop this?
There are a number of things they might have done to copy your site, including but not limited to:
Using a tool to scrape a complete copy of your site and place it on their server
Using their DNS name to point to your site
Manually re-creating your site as their own
Responding to requests to their site by scraping yours in real time and returning that as the response
etc.
What can I do to stop this?
Not a whole lot. You can try to prevent direct linking to your content by requiring referrer headers for your images and other resources, so that requests need to come from pages you serve. But (1) those headers can be faked, and (2) not all browsers send them, so you'd break a small percentage of legitimate users. This also won't stop anybody from copying content, just from "deep linking" to it.
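A naive version of that referrer check, sketched as a Django-style view (ALLOWED_HOST and the image path are placeholders; the caveats above still apply):

    from django.http import FileResponse, HttpResponseForbidden

    ALLOWED_HOST = "www.example.com"   # your real domain

    def serve_image(request, filename):
        referer = request.META.get("HTTP_REFERER", "")
        if ALLOWED_HOST not in referer:
            return HttpResponseForbidden("No hotlinking, please")
        # In real code, validate filename against path traversal first.
        return FileResponse(open(f"/srv/images/{filename}", "rb"))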
Ultimately, by having a website you are exposing that information to the internet. On a technical level anybody can get that information. If some information should be private you can secure that information behind a login or other authorization measures. But if the information is publicly available then anybody can copy it.
"Stopping this" is more of a legal/jurisdictional/interpersonal concern than a technical one I'm afraid. And Stack Overflow isn't in a position to offer that sort of advice.
You could run your site with some lightweight authentication. Just issue a cookie passively when they pull a page, and require the cookie to get access to resources. If a user visits your site and then the parallel site, they'll still be able to get in, but if a user only knows about the parallel site and has never visited the real site, they will just see a crap ton of broken links and images. This could be enough to discourage your doppelganger from keeping his site up.
Another (similar but more complex) option is to implement a CSRF mitigation. Even though this isn't a CSRF situation, the same mitigation will work. Essentially you'd issue a cookie as described above, but in addition insert the cookie value in the URLs for everything and require them to match. This requires a bit more work (you'll need a filter or module inserted into the pipeline) but will keep out everybody except your own users.
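A sketch of that cookie-plus-URL-token idea as Django middleware. All names here are illustrative, and your templates would have to append ?t={{ token }} to internal links for the match to succeed:

    import secrets
    from django.http import HttpResponseForbidden

    class LinkTokenMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            token = request.COOKIES.get("site_token")
            if request.path != "/":
                # Deep pages and resources require the URL token to match
                # the visitor's cookie; mirrored copies will have neither.
                if not token or request.GET.get("t") != token:
                    return HttpResponseForbidden("Missing or mismatched token")
            response = self.get_response(request)
            if not token:
                # Passively issue a random token on first contact.
                response.set_cookie("site_token", secrets.token_urlsafe(16))
            return response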

Facebook Send Button: 'Sorry, something went wrong'

I'm implementing a Facebook Send dialog, by opening facebook.com/dialog/send in a popup window. When I click the 'Send' button the dialog will display the error: 'Sorry, something went wrong' – but only for certain link URLs. The best examples I have are:
Not Working:
https://www.facebook.com/dialog/send
?app_id=12345
&name=Example
&link=facebook.com/examplepage
&redirect_uri=http://www.example.com/response
Working:
https://www.facebook.com/dialog/send
?app_id=12345
&name=Example
&link=google.com
&redirect_uri=http://www.example.com/response
These links are identical except that the first one shares 'facebook.com' and the second one shares 'google.com'. Only the second one works.
The same errors occur if I use the Facebook JavaScript API with FB.ui({method: 'send'}).
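For what it's worth, the popup URL in both cases is just the dialog endpoint plus query parameters, e.g. built like this (the values are the placeholders from above):

    from urllib.parse import urlencode

    params = {
        "app_id": "12345",
        "name": "Example",
        "link": "facebook.com/examplepage",   # the failing variant
        "redirect_uri": "http://www.example.com/response",
    }
    print("https://www.facebook.com/dialog/send?" + urlencode(params))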
Answering my own question:
Both the links now work for me, although I have not changed anything. I can only hypothesise that Facebook has either fixed this bug, or this issue occurs on some internet connections and not others.
I think Facebook is trying to read the URL and pre-populate the image and content. In your case it's probably because skiggle.com.au is redirected to the other domain and Facebook doesn't accept it.
Edit: your first link works now, though.
I had the same problem & worked out that this was due to linking to a Facebook page that did not have a vanity URL set up (i.e. http://www.facebook.com/CubicMushroom rather than http://www.facebook.com/profile.php?id=261963707177053). If you set up a vanity URL for the page it seems to work OK (providing you link to the vanity URL version of the page URL).
To claim a vanity URL, once you have a certain number of Likes (it used to be 25, but I think it's a little lower now), visit https://www.facebook.com/username/
This can also happen if you share a link to localhost

Making CAPTCHA temporarily sticky for a user?

I have a forum where anonymous users are allowed to post, protected by a CAPTCHA. For users' convenience, I set a cookie for such a user which lasts about a month, so the user does not get the CAPTCHA over and over again. In the simplest form, the cookie is called no_captcha_for_one_month and its value is 1. When the user returns and posts anonymously, he gets no CAPTCHA.
Anyone see the flaw? A forum spammer just needs to fill out the CAPTCHA correctly once and use the cookie information for his bot, and there he goes.
I thought about getting creative and using a server-side hash which includes, e.g., the user's IP address and some secret salt to generate the cookie value, but it would still be valid for that IP address, of course.
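For concreteness, that hash scheme might look like this (SECRET_SALT, the function name, and the cookie wiring are placeholders):

    import hashlib

    SECRET_SALT = b"change-me"   # keep server-side only

    def no_captcha_cookie_value(ip_address: str) -> str:
        # Tie the cookie value to the visitor's IP plus a secret salt.
        return hashlib.sha256(SECRET_SALT + ip_address.encode()).hexdigest()

    # Set:   cookie = no_captcha_cookie_value(request_ip)
    # Check: cookie == no_captcha_cookie_value(request_ip)
    # As noted, the value stays valid for anyone behind that same IP.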
Somehow I get the impression the question is silly and I'm trying to solve something unsolvable.
I would recommend implementing your cookie value + salt idea, not to solve your problem but for security reasons. As explained by this blog post, WordPress had a similar, albeit much more severe, problem due to poor cookie security. In your case, a determined spammer could always bypass your CAPTCHA even if the cookie had expired.
To solve the proposed problem, the only solution that comes to mind is a forced-CAPTCHA algorithm that overrides your newly secured cookie if it thinks the user is being spammy. Off the top of my head, I would use attributes like the time since the last post, the number of posts today, the length of time it took to compose the message on the form, etc.
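As an illustration only, such a heuristic could look like this (all thresholds are invented for the example):

    def should_force_captcha(seconds_since_last_post: float,
                             posts_today: int,
                             compose_seconds: float) -> bool:
        if seconds_since_last_post < 30:   # rapid-fire posting
            return True
        if posts_today > 20:               # unusually prolific for anonymous
            return True
        if compose_seconds < 3:            # message "typed" impossibly fast
            return True
        return False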
Edit: I should also mention that you can make your forum less attractive to spammers in the first place by implementing the rel="nofollow" attribute on user submitted links. See Wikipedia.
With such a solution it is always possible to use the cookie for a bot, no matter what you try.
As said below, a cookie can easily be taken from a browser and pasted into a bot's code, so the solution isn't robust.
Other solutions:
Find some users who post a lot in the forum and ask them if they will volunteer to be moderators. A forum like the AutoHotkey one uses this system, and it works fine. Spammers tend to avoid active forums where moderation is fast and efficient; they prefer dead forums...
Limit the number of anonymous posts per IP address. This can be annoying for users, but it can prevent spam flooding. Set it up only if you actually experience such flooding.
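A minimal sketch of such a per-IP cap (in-memory for illustration; a real forum would persist the counts in its database or cache, and the threshold is invented):

    from collections import defaultdict
    from datetime import date

    MAX_ANON_POSTS_PER_DAY = 5            # invented threshold
    _counts = defaultdict(int)            # (ip, day) -> posts so far

    def allow_anonymous_post(ip_address: str) -> bool:
        key = (ip_address, date.today())
        if _counts[key] >= MAX_ANON_POSTS_PER_DAY:
            return False
        _counts[key] += 1
        return True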
Even worse: because you are using a cookie, the spammer doesn't even need to do the CAPTCHA once. Cookies can be changed by the client; they are sent by the browser with the page request, so the client can send whatever it wants. In fact, spam requests would come from a script, so it's even easier to fabricate the cookies.
Storing the variable server-side will solve the problem I've mentioned: you set a random hash as the cookie, and have a table that stores the CAPTCHA status on the server. For the spammer to get no CAPTCHA, they would have to guess a hash that has the correct variable stored server-side, which is very hard to do.
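A sketch of that server-side scheme (an in-memory set stands in for the database table):

    import secrets

    _captcha_passed = set()               # stands in for a DB table of tokens

    def issue_captcha_token() -> str:
        token = secrets.token_urlsafe(32) # infeasible to guess
        _captcha_passed.add(token)
        return token                      # set this as the cookie value

    def captcha_already_passed(cookie_token: str) -> bool:
        return cookie_token in _captcha_passed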
The problem you mentioned, that once a month might not be long enough to deter a spammer, you can't get around: you have to show a CAPTCHA to every real user exactly as often as you want the spammer to enter one. Remember, a CAPTCHA is necessary because you can't tell a spammer from a normal user.
You should show the CAPTCHA often; it will convince people to sign up anyway.
Encrypt (or hash) the current time in nano- or picoseconds, set it as a hidden input value, and store it in your database in a column named 'hash'.
Set that on every page and check whether the submitted value matches the DB.
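A rough sketch of that idea, reading "encrypt" loosely as "hash" (the in-memory set stands in for the 'hash' column):

    import hashlib
    import time

    _issued_hashes = set()   # stands in for the DB column 'hash'

    def issue_form_hash() -> str:
        stamp = str(time.time_ns()).encode()       # nanosecond timestamp
        digest = hashlib.sha256(stamp).hexdigest()
        _issued_hashes.add(digest)                 # INSERT ... ('hash')
        # Render as: <input type="hidden" name="hash" value="...">
        return digest

    def form_hash_matches(submitted: str) -> bool:
        return submitted in _issued_hashes         # SELECT ... WHERE hash = ...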