Why don't people use <CFLOGIN>? - coldfusion

Why don't people use CFLOGIN? I remember having a problem with it in CF7 some months ago, but I can't remember what was wrong with it.

I use cflogin all the time and it works great. It can be a little tricky to get working the way you like, but the benefits are huge. Being able to fine-tune your application with user roles takes care of the bulk of my rights-based customization. There used to be some issues with session management that made it difficult to work with; turning on J2EE sessions seems to make most of those issues go away.
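For anyone who hasn't tried it, the pattern usually lives in Application.cfm (or onRequestStart of Application.cfc). Here is a minimal sketch; the datasource, table, and column names are purely illustrative:

    <!--- Application.cfm sketch; "myDSN", the users table and its columns are made up --->
    <cflogin>
        <!--- this body only runs when nobody is logged in yet; the cflogin struct
              is defined when credentials arrive via form fields named j_username
              and j_password, or via Basic HTTP authentication --->
        <cfif isDefined("cflogin")>
            <cfquery name="qUser" datasource="myDSN">
                SELECT roles
                FROM   users
                WHERE  username = <cfqueryparam value="#cflogin.name#" cfsqltype="cf_sql_varchar">
                AND    password_hash = <cfqueryparam value="#hash(cflogin.password, 'SHA-256')#" cfsqltype="cf_sql_varchar">
            </cfquery>
            <cfif qUser.recordCount>
                <cfloginuser name="#cflogin.name#" password="#cflogin.password#" roles="#qUser.roles#">
            </cfif>
        </cfif>
    </cflogin>

    <!--- then, anywhere in the application --->
    <cfif isUserInRole("admin")>
        <a href="admin.cfm">Admin console</a>
    </cfif>

    <!--- or lock a CFC method down by role --->
    <cffunction name="deleteUser" access="remote" roles="admin">
        <!--- ... --->
    </cffunction>

Once cfloginuser has run, isUserInRole() and the roles attribute on cffunction do the authorization checks for you, which is where most of that rights-based customization comes from.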
Some of the popular frameworks are not compatible with cflogin, so that might be one reason you don't see a lot of it. They tend to have their own approach to securing application features.
I think a lot of people get frustrated with it because it is a little quirky and they give up on it. Others have more complicated security needs that aren't addressed completely by cflogin, so they wind up writing their own system. Specifically, there isn't an easy way to deal with rights by content asset.

The only issue I've had is with roles in CF8. It's brilliantly implemented, and a little cruel that it doesn't work quite as it should. Maybe in CF9.
In any event, building your own roles-based system (assign the user a session variable holding a comma-separated list of access levels that the system can check against) isn't too hard to do, and I got over it.
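As a rough sketch of that roll-your-own approach (the variable name and the role names are just examples):

    <!--- at login time, after verifying the user's credentials --->
    <cfset session.accessLevels = "editor,reports">

    <!--- later, wherever a feature needs protecting --->
    <cfif NOT listFindNoCase(session.accessLevels, "reports")>
        <cflocation url="accessDenied.cfm" addtoken="no">
    </cfif>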
The one nice thing about cfLogin that is probably still worth using is how it ties into the Server Monitor, so you can see how many people are logged in, and so on.
The point above about using J2EE sessions is true; it's worth doing in all CF apps. It was one of the best things I dragged myself through to get working the way I wanted.

CFLogin is not widely used, for three reasons.
First, it's a little touchy, a little strange, and doesn't work the way many people would expect. You put some code inside the tag, and it only runs when a user isn't logged in... that's just odd, you know? It didn't help that there were some bugs early on, either.
Second, while it has the basic security features required for a web application, it doesn't go any further, and you can't really extend it easily. Who's to say that's how everybody wants it?
Third, and most realistically, it's because people have already solved that problem. The problem area of securing an application (authentication and authorization) has been thought through in the community for long enough that most people know how to just do it. CFLogin is reinventing the door. It is too little, too late.
Now, that's not to say that no one uses it. I personally have used it a few times with basic success, but nothing worth ringing a bell over. For most of my applications, it simply makes more sense not to use CFLogin: the problem domains vary, and CFLogin doesn't always solve them in the most intelligent way.

Do keep in mind that CFLOGIN has a catch with Basic HTTP Auth: the browser can keep sending the UserID and password even after you have called CFLOGOUT.
I know this has driven some advanced users away from it.
Here is an excerpt from the LiveDocs:
Caution: If you use web server-based authentication or any form of authentication that uses a Basic HTTP Authorization header, the browser continues to send the authentication information to your application until the user closes the browser, or in some cases, all open browser windows. As a result, after the user logs out and your application uses the cflogout tag, until the browser closes, the cflogin structure in the cflogin tag will contain the logged-out user's UserID and password. If a user logs out and does not close the browser, another user might access pages with the first user's login.
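One way to work around this (a sketch, assuming session management is enabled; session.loggedOut is an illustrative flag of my own, not part of cflogin) is to track logout yourself and refuse to silently re-authenticate from the credentials the browser keeps re-sending:

    <!--- logout.cfm --->
    <cfset session.loggedOut = true>
    <cflogout>

    <!--- Application.cfm --->
    <cfparam name="session.loggedOut" default="false">
    <!--- a deliberate login form post clears the flag --->
    <cfif structKeyExists(form, "j_username")>
        <cfset session.loggedOut = false>
    </cfif>
    <cflogin>
        <cfif isDefined("cflogin") AND NOT session.loggedOut>
            <!--- verify cflogin.name / cflogin.password and call cfloginuser as usual --->
        </cfif>
    </cflogin>

With pure Basic HTTP authentication there is no form post to clear the flag, which is really the point of the caution above: the credentials live until the browser closes, so form-based login is the safer choice if users share machines.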

In my case (and I suppose for some other people too) the main reason is moving from another platform, say PHP. I mean that I had already picked up some knowledge and habits in ACL development and kept using them in CF.
I know how to make it convenient for the user, flexible for the developer, and secure, so I don't really need to switch to cflogin.
Sometimes the same happens with other features; for example, in most cases I prefer to implement client-side validation with my own JS instead of using cfform/cfinput.

Because it (still!) has serious bugs, like this one:
http://www.raymondcamden.com/index.cfm/2009/8/7/Watch-out-for-this-CFLOGIN-Bug

Related

Ajax login in Django

I want to do my log in stuff with an ajax request so that the page doesn't reload. So instead of using a form with method="POST", I will just make a post request with the email and password field values.
What are the upsides and downsides to this? How do I ensure security of the credentials? Please let me know if you have any questions.
Normally linking to external resources isn't ideal here, but in this case the broad nature of your question, and the exact fit of a specific external resource, make me want to recommend that you read:
https://code.djangoproject.com/wiki/AJAX
It's provided by the Django community to answer this sort of question, and contains links to popular AJAX libraries and tutorials. Your question didn't provide much in the way of specifics, but I'd imagine at least one of those tutorials matches your situation.
It doesn't specifically address security, but that's a very broad topic: to get a proper answer here you'd have to ask a more specific question.
However, if you want to take security seriously, I'd highly recommend trying to understand the OWASP (Open Web Application Security Project) Top Ten attacks: https://owasp.org/www-project-top-ten/
If you simply understand those ten attacks, and how to defend against them, you can protect a Django-based site ... or a site based on any other framework ... because properly protecting your site transcends framework-specific concerns.
Also it's worth noting that some of those attacks can apply to non-AJAX sites also, so it's a great read even if you don't adopt AJAX.

Facebook Client-side authentication and offline_access deprecation

Before you start yelling at me: I know many users have already asked for something like this, but I read all of those questions and couldn't find any reply that covers my specific case. I eventually managed to get something working, but it's not what I think I (and other developers) are looking for. I want to share my experience with all of you, so I'll try to describe my scenario and the steps I followed while looking into this. Please indulge me for this long post; I'm sure it will help developers in the same situation clear their minds, just as I hope it will give others the right information to help me (and others) with it.
I wrote a native Android application that makes use of the Facebook API. I DO NOT make use of the Facebook SDK, because I don't want to rely on the official app being installed on the device (as a matter of fact, my app is in part an alternative to that app so it would be silly to need it installed anyway in the first place), but I rather issue Graph API calls directly via HTTP and handle the responses myself. So if that is the answer you're thinking of giving me, please don't because I won't take that road.
As such, I made use of the Client-side authentication to authorize my app, displaying the URL in a WebView and getting the access_token at the end. I requested offline_access among the other permissions.
Since offline_access is going to be deprecated in May, I started investigating how to get long lived tokens anyway, and so read almost everything I could find related to that, including of course the official guidelines. Long story short, nothing worked for me, and I'm still stuck with very short-lived access_tokens that I can do nothing about.
This is what I did to begin:
Deprecated the offline_access for my app (well not THE app since it's being used by many users right now, but another one which is basically the same and I use for testing purposes only so that's the same thing) in the settings.
Authorized a user using Client-side authentication: https://www.facebook.com/dialog/oauth?client_id=MY_APP_ID&redirect_uri=http://my.domain.com/yeah.html&scope=publish_stream,read_stream,user_photos,friends_photos,offline_access&response_type=token&display=wap
I got my access_token, but I immediately noticed how it was not long-lived at all, quite the opposite: expires_in was set to something like 6800 seconds (less than two hours). So the first assumption I had made (access_tokens will be longer lived by default) was already wrong.
I looked into how this access_token lifetime could be extended then, and tried almost every alternative out there. Needless to say, every attempt failed. That's what I tried, to be precise:
First of all, I of course tried the "official" approach, that is, extending the token through the new endpoint. Skipping for now the rant about how stupid it is to require the client secret for such an operation (as many folks have already pointed out, that secret would need to be embedded in the Android app, which is a security nightmare as far as we developers are concerned, while moving this bit server-side to extend the token's life on behalf of the user is a nightmare for the users instead, since they'd need to trust me with their access_token), I tried issuing a GET request to that endpoint with the correct parameters: https://graph.facebook.com/oauth/access_token?client_id=APP_ID&client_secret=APP_SECRET&grant_type=fb_exchange_token&fb_exchange_token=EXISTING_ACCESS_TOKEN
The request was apparently successful, but it did NOT extend the lifetime of anything. It just returned the same access_token as before, with an expires_in parameter that merely reflected the sand of time flowing away (the same as before, minus the seconds that had passed since I authorized). Basically, that method only told me how long the already available access_token would live, without refreshing or changing anything, so, besides the obvious security concerns it raises, it is pretty useless too.
I then tried what someone else suggested, that is using the old REST API to do the job, issuing a GET request to the following address: https://api.facebook.com/method/auth.extendSSOAccessToken?access_token=EXISTING_ACCESS_TOKEN which obviously failed too with the infamous "The access token was not obtained using single sign-on" error.
After those failed attempts, I started thinking about what might be the cause of all those failures. As I mentioned, my app runs on Android devices but issues HTTP requests to the API directly, which I guess may be the root of the problem.
In the advanced section of my developer apps page, my app was configured as "Web" rather than "Native/Desktop". Changing it to "Native/Desktop" did nothing but give me a longer-lived access_token on the first login (about 24 hours rather than 1-2), while the attempts at extending its life described above failed just as before.
The official guideline has an interesting and quite creepy paragraph: "Desktop applications will not be able to extend the life of an existing access_token and the user must login to facebook once the token has expired". While this seems to have been overlooked by many, I started to think it might be the cause of my problems, so I tried an alternative approach: server-side authentication rather than client-side. Again, this requires the client_secret, so it would be a dumb solution for an Android app, but I wanted to try it anyway. So I got the code first, and then the access_token after that (as described in http://developers.facebook.com/docs/authentication/server-side/). This resulted in a much longer-lived access_token (5183882 seconds, that is, about 59 days), but then again, both of the known means of extending it (even if not really needed in this case) gave the same results as before: the former not refreshing anything, the latter complaining that the token was not obtained via SSO.
So, very long story short (I know, too late), the deadline for deprecating offline_access is so close you can feel it breathing on your neck, and nothing seems to work. What is your experience with all of this and, if you're on the same boat as I am and you managed to get it working, how did you do it?
Thanks for your patience.

Do we still need to worry about users turning off cookies?

I've noticed that a lot of sites don't bother anymore with work-arounds so users who have turned their cookies off can still get the same experience on the site. Has that problem just gone away in modern web development? Have we gotten to a point where nobody does it, so we don't need to bother?
I think I put this in the same category as JavaScript. Most people will have cookies enabled, but there will be a few people who have them turned off. There isn't the scare there was in the mid-90s about evil corporations tracking you all over the net and so on. People have become more accepting of how the web works and of what is required for the convenience of web sites remembering who you are.
Some people still turn off cookies every once in a while, usually because they wanted to test something and then forgot, leaving them off. Nowadays most web apps require cookies, so I think it's perfectly acceptable to skip the complex workarounds for providing the same user experience with or without cookies and live with a simple check and a message stating that the site won't work without cookies.
There are lots of major websites that behave this way.
My 2c: cookies are good by default and Javascript is evil by default.
As to what general user sentiment is... I'd do cookie detection still so that you can display a meaningful error rather than simply not working if your users are blocking cookies for whatever reason. Don't bother trying to work around it though.
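If it helps, cookie detection can be as small as setting a test cookie and bouncing through a redirect. Here is a sketch in CFML (this thread's main topic), though the same trick works on any server stack; the file and cookie names are made up:

    <!--- cookieCheck.cfm --->
    <cfif NOT structKeyExists(url, "checked")>
        <cfcookie name="cookieTest" value="1">
        <cflocation url="cookieCheck.cfm?checked=1" addtoken="no">
    <cfelseif NOT structKeyExists(cookie, "cookieTest")>
        <p>This site needs cookies to work. Please enable them and reload the page.</p>
        <cfabort>
    </cfif>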
I'm going to guess that it'd be worthwhile running a test to see specifically whether your visitors have cookies turned off, because different groups will have different issues (paranoia, security restrictions, etc.). A website catering to government employees might see higher percentages of cookie non-acceptance than other sites.
As some browsers (or plugins) allow customizing your acceptance of cookies by server or domain, it's possible that even two sites with identical user populations might have different levels of 'trust', if the users believe that one site seems shady.

Prevent anyone from executing your web service?

I've got a webservice which is executed through javascript (jquery) to retrieve data from the database. I would like to make sure that only my web pages can execute those web methods (ie I don't want people to execute those web methods directly - they could find out the url by looking at the source code of the javascript for example).
What I'm planning to do is add a 'Key' parameter to all the web methods. The key will be stored in the web pages in a hidden field, and its value will be set dynamically by the web server when the page is requested. The key will only be valid for, say, 5 minutes. This way, when a web method needs to be executed, the JavaScript will pass the key to the web method, and the web method will check that the key is valid before doing whatever it needs to do. (A rough sketch of this scheme appears after the question.)
If someone wants to execute the webmethods directly, they won't have the key which will make them unable to execute them.
What are your views on this? Is there a better solution? Do you foresee any problems with my solution?
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session. I understand that if someone really wants to break this, they can parse the html code and get the value of the hidden field but they would have to do this regularly as the key will change every x minutes... which is of course possible but hopefully will be a pain for them.
EDIT: what I'm doing is a web application (as opposed to a web site). The data is retrieved through web methods (+jquery). I would like to prevent anyone from building their own web application using my data (which they could if they can execute the web methods). Obviously it would be a risk for them as I could change the web methods at any time.
I will probably just go for the referrer option. It's not perfect but it's easy to implement. I don't want to spend too much time on this as some of you said if someone really wants to break it, they'll find a solution anyway.
Thanks.
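To make the time-limited key idea above concrete, here is a minimal sketch, written in CFML only because that is this thread's main topic; the same signed-timestamp approach works on any server stack. application.apiSecret, the 5-minute window, and the token format are all illustrative:

    <!--- when rendering the page: issue a key that encodes its own expiry --->
    <cfset expiresAt = dateAdd("n", 5, now())>
    <cfset stamp = dateFormat(expiresAt, "yyyymmdd") & timeFormat(expiresAt, "HHmmss")>
    <cfset apiKey = stamp & "-" & hash(stamp & application.apiSecret, "SHA-256")>
    <cfoutput><input type="hidden" id="apiKey" value="#apiKey#"></cfoutput>

    <!--- inside the remote method that the JavaScript calls --->
    <cfset parts = listToArray(arguments.key, "-")>
    <cfset nowStamp = dateFormat(now(), "yyyymmdd") & timeFormat(now(), "HHmmss")>
    <cfif arrayLen(parts) NEQ 2
          OR hash(parts[1] & application.apiSecret, "SHA-256") NEQ parts[2]
          OR parts[1] LT nowStamp>
        <cfthrow message="Invalid or expired key">
    </cfif>

As the answers below point out, anyone can still scrape the page and read the hidden field, so this only raises the bar; it does not make the service private.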
Well, there's nothing technically wrong with it, but your assumption that "they won't have the key which will make them unable to execute them" is incorrect, and thus the security of the whole thing is flawed.
It's very trivial to retrieve the value of a hidden field and use it to execute the method.
I'll save you a lot of time and frustration: If the user's browser can execute the method, a determined user can. You're not going to be able to stop that.
With that said, any more information on why you're attempting to do this? What's the context? Perhaps there's something else that would accomplish your goal here that we could suggest if we knew more :)
EDIT: Not a whole lot more info there, but I'll run with it. Your solution isn't really going to increase the security at all and is going to create a headache for you in maintenance and bugs. It will also create a headache for your users in that they would then have an 'invisible' time limit in which to perform actions on pages. With what you've told us so far, I'd say you're better off just doing nothing.
What kind of methods are you trying to protect here? Why are you trying to protect them?
MORE INFO: for what I'm doing, the visitors are not logged in so I can't use a session.
If you are sending the client a key that they will send back every time they want to use a service, you are in effect creating a session. The key you are passing back and forth is functionally no different from a cookie (except that it will be passed back only on certain requests). You might as well save yourself the trouble and set a temporary cookie that expires in 5 minutes. Add a little server-side check for expired cookies and you'll have probably the best you can get.
You may already have such a key if you're using a language or framework that sets a session id. Send that with the Ajax call. (Note that such a session lasts a bit longer than five minutes, but note also that it's what you're using to keep state for the user's regular HTTP GETs and POSTs.)
What's to stop someone requesting a webpage, parsing the results to pull out the key and then calling the webservice with that?
You could check the referrer header to check the call is coming from one of your pages, but that is also easy to spoof.
The only way I can see to solve this is to require authentication. If the web pages that call the webservice require the user to be logged in, then you can check that they're logged in when they call the webservice. This doesn't stop other pages from using your webservice, but it does let you track usage better, and with some rate limiting you should be able to prevent abuse of your service.
If you really don't want to risk your webservice being abused then don't make it public. That's the only failsafe solution.
Let's say that you generate a key valid from 12:00 to 12:05. At 12:04 I open the page, read it at my leisure, and at 12:06 I trigger an action that uses your web service. I'll be blocked from doing so even though I'm a legitimate visitor.
I would suggest restricting access to the web services by HTTP referrer (allow only requests from your domain and null referrers) and/or requiring user authentication for calling the methods.
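For what it's worth, the referrer check really is a one-liner; here is a sketch in CFML (this thread's main topic), using the standard CGI scope and a made-up domain, and as noted it is trivially spoofable:

    <!--- reject calls whose referrer is present but not from our domain --->
    <cfif len(cgi.http_referer) AND NOT findNoCase("my.domain.com", cgi.http_referer)>
        <cfthrow message="Forbidden">
    </cfif>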

Is there much of an anti-cookie movement anymore?

I'm not sure whether this belongs on StackOverflow or on ServerFault, so I've picked SO for as first go.
A number of years ago, there was a highly visible discussion about misuse of HTTP cookies, leading to various cookie-filtering proxies and eventually to active cookie filtering in browsers like Firefox and Opera. Even now, Google will admit that about 7% of end-users reject their tracking cookies, which is quite a lot, actually.
I still vet all cookies that get set in my browser. I have for years. I personally do not know anyone else who does this, but it has given me a few interesting insights into web tracking. For instance, there are many, many more sites using Google Analytics than there were even two years ago. And there are still sites (extremely few, fortunately) which malfunction hideously if you don't let them set cookies. But advertisers in particular are still setting cookies to track your way across the web.
So is there much of an anti-cookie movement anymore? Has anyone tried to take Google to task for setting so many with Analytics? Is anyone trying to vilify sites like eBay and PayPal, which use a dodgy cross-site cookie to let you log in?
Or am I making too much of a stupidly small problem?
Nowadays, there are other ways to block these annoyances. Rick752's EasyList has the EasyPrivacy list, which blocks most of them with no work at all other than adding the subscription once to Adblock Plus. NoScript can (with a little configuration, mostly removing some misguided entries on the default whitelist) easily block the ones which depend on JavaScript.
That said, I set up my browser to empty all the cookies on logout. Then they can track you only for the duration of a session, which will be short unless you tend to keep your browser open for a long time (or use the session save/restore all the time).
If you use Flash, know that it also has a kind of cookies, and the interface to manage them is most probably poorer than your browser's.
There are always people who misunderstand cookies, on both sides. Ultimately, it's up to the browsers to properly identify the sites cookies belong to. As long as the cookie is being set properly and the browser is respecting that, it's just not much of a problem. I think that, with the increased use of web toolkits that take care of the programmatic details (and better, slightly more security-conscious browsers), it's not much of an issue now for end-users.
Beyond that, with the proliferation of DHTML and XML-based partial-page-loading mechanisms (as well as database back-ends and the like), the need to track session state between stateless pages is reduced now. Your web app can very easily keep state without cookies, and that may well have been partially driven by the number of [generally misinformed] end-users who blocked cookies altogether.
In shorter words: "IMHO, no".
I gave up both as user and developer.
As a user the convenience of staying logged into sites is just too tempting, the pain of some sites not working too annoying. And I'm not that sensitive about my privacy, so I stopped caring and let all cookies through.
As a developer I always try to be as RESTful as possible, but I don't know any decent way of handling authentication without cookies. HTTP Basic Auth is just too broken, I can't assume HTTPS all the time and mangling URLs is painful and inelegant. What's left is form-based authentication with cookies. So my applications have one auth cookie -- I don't need any more than that, but that by itself requires the user to have cookies on if they want to authenticate themselves. Maybe OpenID and other federated identity services might fix that one day, but at the moment I can't rely on any of these yet.
My biggest annoyance with cookies is that I want to block Analytics cookies but at the same time I need to login to analytics to manage some customer sites. As far as I can tell they are the same cookie (in fact it may be the same cookie across all google services).
I really don't trust the Google cookie. They were apparently one of the first large companies to set cookie expiration to 2038 (the maximum) and their business model is almost entirely advertising based (targeted advertising at that). I suspect they know more about the day-to-day online activities and interests of people than any other government or organisation on the planet.
That's not to say it's all evil or anything but that really is a lot of trust to be given one entity. They may claim it's all anonymised but I'm pretty sure that claim would be hard to verify. At any rate there is no guarantee that this data won't be stolen, legally acquired or otherwise misused at some future point for other purposes.
It isn't impossible that one day this kind of profiling could be used to target people for more serious things than ads. How hard would it be for some future Hitler to establish the IP addresses, bank accounts, schools, employers, club memberships, etc. of some arbitrary class of person for incarceration or worse?
So my answer is that this is not a small problem and history has already taught us many times over what can happen when you start classifying and tracking people. Cookies are not the only means but they are certainly a part of the problem and I recommend blocking them and clearing at every convenient opportunity.
I am also one of the hold-outs who doesn't automatically accept cookies. I do appreciate sites that need fewer, and I am more likely to return to those sites and allow cookies from them in the future.
That said, I do think that being vigilant about cookies is not (rationally) worth the effort. (In other words, I expect I will keep doing what I'm doing because it makes me feel better, even though I don't have evidence of commensurate tangible benefit.)
Every now and again I clear all my cookies. It's a pain, as I then have to log in to sites again (or set preferences), but it's also a good test of whether I or my browser can remember the login details.