So I've recently made a website, and the owners wanted a message at the top letting everyone know the site is still being updated and that they should expect changes. I built the message, but whenever anyone closes it and then refreshes the page, it pops back up, and I don't want that to happen.
How do I go about storing their IP address so that once they close the message it never reappears?
I couldn't find the answer anywhere, but if it has already been asked, pointing me in the direction of the answer would be great. Thanks :)
As far as I know, there isn't a way to do that with just HTML.
The way I would tackle it is to use simple JavaScript to set/check a cookie and show/hide the message accordingly.
The cookie is part of the document object in JavaScript. If you're not familiar with JavaScript, or just looking for some explanation, feel free to Google "JavaScript cookies" or check this link for a simple tutorial from W3Schools.
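Here's a minimal sketch of that approach (the element IDs and cookie name are just made up for illustration):

    <div id="update-notice">
      We are still updating the site, so expect changes.
      <button id="dismiss-notice">Close</button>
    </div>

    <script>
      // Hide the banner up front if the visitor has dismissed it before.
      if (document.cookie.split("; ").includes("noticeDismissed=1")) {
        document.getElementById("update-notice").style.display = "none";
      }

      // On close, hide the banner and remember the choice for a year.
      document.getElementById("dismiss-notice").addEventListener("click", function () {
        document.getElementById("update-notice").style.display = "none";
        document.cookie = "noticeDismissed=1; max-age=" + 60 * 60 * 24 * 365 + "; path=/";
      });
    </script>

(Keep the script after the banner markup so the elements exist when it runs.)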
Also, just to explain why I would choose cookies over storing IP addresses: the cookie approach does the job on the front end, which shifts the overhead onto the client's machine. Storing IP addresses (or tackling the problem in a similar manner) requires a server-side solution, which means slightly more work for your server and a lot more work for you (you'd need to code it in PHP or similar to implement it on the back end).
I hope my answer gets you on the right track.
Good luck with your project.
P.S. Some countries' regulations require you to display a disclaimer of some sort letting the customer know that "our website uses cookies to store information on your device", so make sure you cover that base if you go with cookies.
I use Safari and Firefox. Since Safari doesn't offer the cookie handling I prefer, I frequently invoke a separate tool to clear out most of the cookies. Neither Kayak nor Google are on my exceptions list. A couple of times recently, I cleaned out cache, local storage, flash, etc. to get rid of the so-called zombie cookies (while no webkit clients were running). I was in USA when I did this.
In spite of this, every time I access a Google site it redirects to google.es, and every time I access Kayak.com it redirects to kayak.co.uk.
I understand about browser fingerprinting, but I have occasionally changed a few things that affect that. And even if that weren't enough, there is nothing to hint that they have identified me specifically. I would expect that in the absence of any completely unique identifier, they would assume the country of the IP address(es), which for over four weeks have been Comcast and AT&T (and various public WiFi sites).
It's not a login issue: I don't have a login for Kayak, and I never log in to Google unless absolutely necessary.
With Kayak, I changed it (menu in the lower right corner of the search page) to USA/dollars, but as soon as I go to another site without deleting cookies, when I come back it's on UK/pounds again.
What might be the cause of this? There are a few other sites behaving similarly.
I think I understand it now. Someone correct me if I missed something.
It was not cookies or tracking or HTML redirect.
It was the browser "remembering" that I had visited kayak.co.uk and google.es while over there and "fixing" the URL for me.
I am creating an application using LoopBack and am having a problem managing sessions. When the app logs in, a session is created, but when I reload the page the session is no longer present on the client side, even though it still exists on the server side. Please tell me how to manage the session on the client side, and how to send the response from the server to the client. Sorry for my English.
Thanks in advance.
You can inject $sessionStorage in Angular and use it to preserve the session information that you get back from LoopBack.
But I believe that loopback already has the ability to store the access token in the browser's localStorage, so it is preserved across page reloads and browser (hybrid mobile app) restarts. So I'm not sure why it gets lost for you ... or maybe that's not what you mean by "page session"? Feel free to clarify.
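For example, something along these lines (a sketch assuming the ngStorage module and the LoopBack AngularJS SDK's generated User service; the property names are illustrative):

    angular.module('app', ['ngStorage', 'lbServices'])
      .controller('LoginCtrl', ['$scope', '$sessionStorage', 'User',
        function ($scope, $sessionStorage, User) {
          $scope.login = function (credentials) {
            User.login(credentials).$promise.then(function (token) {
              // Keep the access token so it survives a page reload.
              $sessionStorage.accessToken = token.id;
              $sessionStorage.userId = token.userId;
            });
          };
        }]);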
You can see an example of logging in and then saving the user info to browser here: https://github.com/ShoppinPal/warehouse/blob/f03abc632ac01682e938e58db868290fb6e33083/client/app/scripts/controllers/login.js#L35-L42
If you ever find yourself in a similar situation again, try searching for code on github.com as there is some chance that you might find what you're looking for in an open-source project.
For example, you can get decent hints by searching for "user model sessionStorage path:/client/app", where "user model sessionStorage" are the keywords to look for and "path:/client/app" represents (more or less) the standardized directory structure for LoopBack ("path:/client/js" is another common path to try). It is generally worth limiting your search this way, since it helps narrow thousands of search results down to double digits. I admit it doesn't always work, though: if you didn't know to look for the sessionStorage keyword, the search would have been quite fruitless ;)
We have been working on a gaming website. Recently, while making note of the major traffic sources, I noticed a website that turned out to be a carbon copy of ours. It uses our logo and everything else is the same, just under a different domain name. It can't simply be that their domain name points to ours, because in several places the links look like ccwebsite/our-links; that website even links to some of our images as ccwebsite/our-images.
What has happened? How could they have done that? What can I do to stop this?
There are a number of things they might have done to copy your site, including but not limited to:
Using a tool to scrape a complete copy of your site and placing it on their server
Pointing their DNS name at your site
Manually re-creating your site as their own
Responding to requests to their site by scraping yours in real time and returning that as the response
etc.
What can I do to stop this?
Not a whole lot. You can try to prevent direct linking to your content by requiring referrer headers for your images and other resources so that requests need to come from pages you serve, but 1) those can be faked and 2) not all browsers will send those so you'd break a small percentage of legitimate users. This also won't stop anybody from copying content, just from "deep linking" to it.
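As a rough sketch of that referrer check (Node/Express here just for illustration; www.example.com stands in for your real host):

    const express = require('express');
    const app = express();

    // Only serve /images/* when the Referer points at our own pages.
    // Referer can be faked or missing, so this is a deterrent, not security.
    app.use('/images', function (req, res, next) {
      const referer = req.get('Referer') || '';
      if (!referer.startsWith('https://www.example.com/')) {
        return res.status(403).send('Forbidden');
      }
      next();
    });

    app.use('/images', express.static('public/images'));
    app.listen(3000);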
Ultimately, by having a website you are exposing that information to the internet. On a technical level anybody can get that information. If some information should be private you can secure that information behind a login or other authorization measures. But if the information is publicly available then anybody can copy it.
"Stopping this" is more of a legal/jurisdictional/interpersonal concern than a technical one I'm afraid. And Stack Overflow isn't in a position to offer that sort of advice.
You could run your site with some lightweight authentication. Just issue a cookie passively when they pull a page, and require the cookie to get access to resources. If a user visits your site and then the parallel site, they'll still be able to get in, but if a user only knows about the parallel site and has never visited the real site, they will just see a crap ton of broken links and images. This could be enough to discourage your doppelganger from keeping his site up.
Another (similar but more complex) option is to implement a CSRF mitigation. Even though this isn't a CSRF situation, the same mitigation will work. Essentially you'd issue a cookie as described above, but in addition insert the cookie value in the URLs for everything and require them to match. This requires a bit more work (you'll need a filter or module inserted into the pipeline) but will keep out everybody except your own users.
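A sketch of that cookie-plus-URL-token idea (Node/Express with cookie-parser; the names and paths are purely illustrative):

    const express = require('express');
    const cookieParser = require('cookie-parser');
    const crypto = require('crypto');

    const app = express();
    app.use(cookieParser());

    app.get('/page', function (req, res) {
      // Passively issue a random token when a real page is served...
      const token = crypto.randomBytes(16).toString('hex');
      res.cookie('site_token', token, { httpOnly: true });
      // ...and embed the same token in every resource URL on the page.
      res.send('<img src="/assets/logo.png?t=' + token + '">');
    });

    app.get('/assets/:file', function (req, res) {
      // CSRF-style check: the cookie and the URL token must match.
      if (!req.cookies.site_token || req.cookies.site_token !== req.query.t) {
        return res.status(403).send('Forbidden');
      }
      res.sendFile(req.params.file, { root: 'public/assets' });
    });

    app.listen(3000);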
I've got a web service which is executed through JavaScript (jQuery) to retrieve data from the database. I would like to make sure that only my web pages can execute those web methods (i.e. I don't want people to execute those web methods directly; they could find out the URL by looking at the source code of the JavaScript, for example).
What I'm planning to do is add a 'Key' parameter to all the webmethods. The key will be stored in the web pages in a hidden field and the value will be set dynamically by the web server when the web page is requested. The key value will only be valid for, say, 5 minutes. This way, when a webmethod needs to be executed, javascript will pass the key to the webmethod and the webmethod will check that the key is valid before doing whatever it needs to do.
If someone wants to execute the web methods directly, they won't have the key, which will make them unable to execute them.
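Roughly, the plan in code (sketched in Node purely for illustration; the real web methods would do the same check in whatever runs on the server, and SECRET and the window are placeholders):

    const crypto = require('crypto');
    const SECRET = 'server-side-secret';   // placeholder secret
    const WINDOW_MS = 5 * 60 * 1000;       // the 5-minute validity window

    // Issued when the page is rendered and placed into the hidden field.
    function issueKey() {
      const ts = Date.now().toString();
      const sig = crypto.createHmac('sha256', SECRET).update(ts).digest('hex');
      return ts + '.' + sig;
    }

    // Called by every web method before doing its real work.
    function isKeyValid(key) {
      const parts = (key || '').split('.');
      if (parts.length !== 2) return false;
      const expected = crypto.createHmac('sha256', SECRET)
        .update(parts[0]).digest('hex');
      // (production code would use a timing-safe comparison)
      return parts[1] === expected && Date.now() - Number(parts[0]) < WINDOW_MS;
    }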
What are your views on this? Is there a better solution? Do you foresee any problems with my solution?
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session. I understand that if someone really wants to break this, they can parse the html code and get the value of the hidden field but they would have to do this regularly as the key will change every x minutes... which is of course possible but hopefully will be a pain for them.
EDIT: what I'm doing is a web application (as opposed to a web site). The data is retrieved through web methods (+jquery). I would like to prevent anyone from building their own web application using my data (which they could if they can execute the web methods). Obviously it would be a risk for them as I could change the web methods at any time.
I will probably just go for the referrer option. It's not perfect but it's easy to implement. I don't want to spend too much time on this as some of you said if someone really wants to break it, they'll find a solution anyway.
Thanks.
Well, there's nothing technically wrong with it, but your assumption that "they won't have the key which will make them unable to execute them" is incorrect, and thus the security of the whole thing is flawed.
It's very trivial to retrieve the value of a hidden field and use it to execute the method.
I'll save you a lot of time and frustration: If the user's browser can execute the method, a determined user can. You're not going to be able to stop that.
With that said, any more information on why you're attempting to do this? What's the context? Perhaps there's something else that would accomplish your goal here that we could suggest if we knew more :)
EDIT: Not a whole lot more info there, but I'll run with it. Your solution isn't really going to increase the security at all and is going to create a headache for you in maintenance and bugs. It will also create a headache for your users in that they would then have an 'invisible' time limit in which to perform actions on pages. With what you've told us so far, I'd say you're better off just doing nothing.
What kind of methods are you trying to protect here? Why are you trying to protect them?
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session.
If you are sending the client a key that they will send back every time they want to use a service, you are in effect creating a session. The key you are passing back and forth is functionally no different from a cookie (except that it will be passed back only on certain requests). Might as well save yourself the trouble and set a temporary cookie that will expire in 5 minutes. Add a little server-side check for expired cookies and you'll have probably the best you can get.
You may already have such a key if you're using a language or framework that sets a session ID. Send that with the Ajax call. (Note that such a session lasts a bit longer than five minutes, but note also that it's what you're already using to keep state for the user's regular HTTP GETs and POSTs.)
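A sketch of the throwaway-cookie version (Express with cookie-parser; the names and the placeholder handler are made up):

    const express = require('express');
    const cookieParser = require('cookie-parser');
    const app = express();
    app.use(cookieParser());
    app.use(express.json());

    app.get('/page', function (req, res) {
      // Issue a cookie the browser will drop after 5 minutes.
      res.cookie('svc', String(Date.now()), { maxAge: 5 * 60 * 1000, httpOnly: true });
      res.send('<html>...</html>'); // the real page goes here
    });

    app.post('/service/method', function (req, res) {
      const issued = Number(req.cookies.svc || 0);
      // Server-side expiry check too, in case the cookie lifetime is tampered
      // with (a signed value, as in the HMAC sketch above, would be stronger).
      if (!issued || Date.now() - issued > 5 * 60 * 1000) {
        return res.status(403).end();
      }
      res.json({ ok: true }); // placeholder for the real web method
    });

    app.listen(3000);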
What's to stop someone requesting a webpage, parsing the results to pull out the key and then calling the webservice with that?
You could check the referrer header to verify the call is coming from one of your pages, but that is also easy to spoof.
The only way I can see to solve this is to require authentication. If the web pages that call the web service require the user to be logged in, then you can check that they're logged in when they call the web service. This doesn't stop other pages from using your web service, but it does let you track usage better, and with some rate limiting you should be able to prevent abuse of your service.
If you really don't want to risk your webservice being abused then don't make it public. That's the only failsafe solution.
Let's say you generate a key valid from 12:00 to 12:05. At 12:04 I open the page, read it at my leisure, and at 12:06 I trigger an action which uses your web service. I'll be blocked from doing so even though I'm a legitimate visitor.
I would suggest restricting access to the web services by HTTP referrer (allow only requests from your domain and null referrers) and/or requiring user authentication for calling the methods.
Why don't people use CFLOGIN? I remember having problems with it in CF7 some months ago, but I can't remember what was wrong with it.
I use cflogin all the time and it works great. It can be a little tricky to get working the way you like, but the benefits are huge. Being able to fine tune your application with user roles takes care of the bulk of my rights based customization. There used to be some issues with session management that made it difficult to work with. Turning on j2ee sessions seems to make most of those issues go away.
Some of the popular frameworks are not compatible with cflogin, so that might be one reason you don't see a lot of it. They tend to have their own approach to securing application features.
I think a lot of people get frustrated with it because it is a little quirky and they give up on it. Others have more complicated security needs that aren't addressed completely by cflogin, so they wind up writing their own system. Specifically, there isn't an easy way to deal with rights by content asset.
The only issue I've had is with roles in CF8. It's brilliantly implemented, and it's a little cruel that it doesn't quite work as it should. Maybe in CF9.
In any event, building your own roles-based system (assign the user a session variable holding a comma-separated list of access levels that the system can check against) isn't too hard to do, and I got over it.
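The check itself is tiny; sketched here in JavaScript for illustration, though the original context is a ColdFusion session variable:

    // session.roles would be set at login time, e.g. "user,editor,admin".
    function isUserInRole(session, role) {
      return session.roles.split(',').indexOf(role) !== -1;
    }

    // Usage: guard a feature behind a role check.
    var session = { roles: 'user,editor' };
    if (isUserInRole(session, 'admin')) {
      // show the admin-only controls
    }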
The one nice thing about cfLogin that probably still makes it worth using is how it ties into the Server Monitor to see how many people are logged in, etc.
The point above about using the jsession is true; it's worth doing in all CF apps. It was one of the best things I dragged myself through to get working the way I wanted.
CFLogin is not used for three reasons.
First, it's a little touchy, a little strange, and doesn't work the way many would expect. You put some code here, and if a user isn't logged in it runs it... that's just odd, you know? It didn't help that there were some bugs early on, either.
Second, while it has the basic required security features for a web application, it doesn't go any further. You can't really extend it easily. Who's to say that's how everybody wants it?
Third, and most realistically, it's because people have already solved that problem. The problem area of securing an application, authentication and authorization has been thought out in the community long enough and most people know how to just do it. CFLogin is reinventing the door. It is too little, too late.
Now, that's not to say that no one uses it. I personally have used it a few times with basic success, but no reason to ring a bell. For most of my applications, it makes more sense to not use CFLogin. The problem domains are this way or that, and CFLogin doesn't always solve it in the most intelligent way.
Do keep in mind that CFLOGIN has a catch with Basic HTTP Auth, where the browser can continue to send the UserID and password even after you have called CFLOGOUT.
I know this has driven some advanced users away from it.
Here is an excerpt from LiveDocs
Caution: If you use web server-based authentication or any form authentication that uses a Basic HTTP Authorization header, the browser continues to send the authentication information to your application until the user closes the browser, or in some cases, all open browser windows. As a result, after the user logs out and your application uses the cflogout tag, until the browser closes, the cflogin structure in the cflogin tag will contain the logged-out user's UserID and password. If a user logs out and does not close the browser, another user might access pages with the first user's login.
In my case (and I suppose for some other people too) the main reason is having moved from another platform, say PHP. I mean that I had already gained some knowledge and habits in ACL development and started using them in CF.
I know how to make it handy for the user, flexible for the developer, and secure, so I don't really feel the need to switch to cflogin.
Sometimes the same happens with other stuff; for example, in most cases I prefer to implement client-side validation using my own JS instead of using cfform/cfinput.
Because it (still!) has serious bugs, like this one:
http://www.raymondcamden.com/index.cfm/2009/8/7/Watch-out-for-this-CFLOGIN-Bug