Before you start yelling at me: I know many users have already asked for something like this, but I read all of those threads and couldn't find any reply that covers my specific case. I eventually managed to get something working, but it's not what I think I (and other developers) are looking for. I want to share my experience with all of you, so I'll describe my scenario and the steps I followed while looking into this; please indulge me for this long post. I'm sure it will help developers in the same situation clear their minds, just as I hope it will give others the right information to help me (and them) with it.
I wrote a native Android application that makes use of the Facebook API. I DO NOT make use of the Facebook SDK, because I don't want to rely on the official app being installed on the device (as a matter of fact, my app is in part an alternative to that app, so it would be silly to require it to be installed in the first place); instead, I issue Graph API calls directly via HTTP and handle the responses myself. So if that is the answer you're thinking of giving me, please don't, because I won't take that road.
As such, I made use of the Client-side authentication to authorize my app, displaying the URL in a WebView and getting the access_token at the end. I requested offline_access among the other permissions.
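For reference, the way I grab the token boils down to loading the dialog URL in a WebView, intercepting the redirect, and parsing the access_token out of the URL fragment. A simplified sketch, with class and callback names of my own choosing:

```java
import android.net.Uri;
import android.webkit.WebView;
import android.webkit.WebViewClient;

// Sketch only: intercept the OAuth redirect in the WebView and pull the
// token out of the URL fragment. Names here are illustrative.
public class FbAuthWebViewClient extends WebViewClient {
    private static final String REDIRECT_URI = "http://my.domain.com/yeah.html";

    public interface TokenListener {
        void onToken(String accessToken, long expiresInSeconds);
    }

    private final TokenListener listener;

    public FbAuthWebViewClient(TokenListener listener) {
        this.listener = listener;
    }

    @Override
    public boolean shouldOverrideUrlLoading(WebView view, String url) {
        if (url.startsWith(REDIRECT_URI)) {
            // The token comes back in the fragment:
            // http://my.domain.com/yeah.html#access_token=...&expires_in=...
            Uri parsed = Uri.parse(url.replaceFirst("#", "?"));
            String token = parsed.getQueryParameter("access_token");
            String expires = parsed.getQueryParameter("expires_in");
            listener.onToken(token, expires == null ? 0 : Long.parseLong(expires));
            return true; // don't actually load the redirect page
        }
        return false;
    }
}
```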
Since offline_access is going to be deprecated in May, I started investigating how to get long-lived tokens anyway, and read almost everything I could find on the subject, including of course the official guidelines. Long story short: nothing worked for me, and I'm still stuck with very short-lived access_tokens that I can do nothing about.
This is what I did to begin:
Deprecated offline_access for my app in the settings (well, not THE app, since that one is being used by many users right now, but another one which is basically the same and which I use for testing purposes only, so it amounts to the same thing).
Authorized a user using client-side authentication: https://www.facebook.com/dialog/oauth?client_id=MY_APP_ID&redirect_uri=http://my.domain.com/yeah.html&scope=publish_stream,read_stream,user_photos,friends_photos,offline_access&response_type=token&display=wap
I got my access_token, but I immediately noticed that it was not long-lived at all, quite the opposite: expires_in was set to something like 6800 seconds (less than two hours). So the first assumption I had made (that access_tokens would be longer-lived by default) was already wrong.
I then looked into how this access_token's lifetime could be extended, and tried almost every alternative out there. Needless to say, every attempt failed. This is what I tried, to be precise:
First of all, I of course tried the "official" approach, that is, extending the token through the new endpoint. Skipping for now the rant about how stupid it is to require the client secret for such an operation (as many folks have already pointed out, that secret would need to be embedded in the Android app, which is a security nightmare as far as we developers are concerned; moving this bit server-side to extend the token's life on behalf of the user is a nightmare for the users instead, since they'd need to trust me with handling their access_token), I issued a GET request to that address with the correct parameters: https://graph.facebook.com/oauth/access_token?client_id=APP_ID&client_secret=APP_SECRET&grant_type=fb_exchange_token&fb_exchange_token=EXISTING_ACCESS_TOKEN. The request was apparently successful, but it did NOT extend the lifetime of anything. It just returned the same access_token as before, with an expires_in parameter that merely reflected the sands of time flowing away (the same value as before, minus the seconds that had passed since I authorized). Basically, that method only told me how long the already-available access_token had left to live, without refreshing or changing anything; so, besides the obvious security concerns it raises, it is pretty useless too.
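For reference, the exchange attempt boils down to something like the sketch below (placeholder credentials throughout; at the time, the endpoint returned a form-encoded body rather than JSON):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch of the fb_exchange_token call described above; placeholders throughout.
public final class TokenExchange {

    public static String extendToken(String appId, String appSecret, String existingToken)
            throws Exception {
        String query = "client_id=" + URLEncoder.encode(appId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(appSecret, "UTF-8")
                + "&grant_type=fb_exchange_token"
                + "&fb_exchange_token=" + URLEncoder.encode(existingToken, "UTF-8");
        URL url = new URL("https://graph.facebook.com/oauth/access_token?" + query);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        try {
            // The response body is form-encoded: access_token=TOKEN&expires=SECONDS
            StringBuilder body = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                body.append(line);
            }
            for (String pair : body.toString().split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && "access_token".equals(kv[0])) {
                    return kv[1];
                }
            }
            return null; // no token in the response
        } finally {
            in.close();
            conn.disconnect();
        }
    }
}
```

In my tests, the token this call returned was byte-for-byte the same one I had passed in.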
I then tried what someone else suggested, that is, using the old REST API to do the job, issuing a GET request to the following address: https://api.facebook.com/method/auth.extendSSOAccessToken?access_token=EXISTING_ACCESS_TOKEN, which obviously failed too, with the infamous "The access token was not obtained using single sign-on" error.
After those failed attempts, I started thinking about what might be causing all of them to fail. As I anticipated, my app runs on Android devices but issues HTTP requests to the API directly, which I guess may be the root of the problem.
In the Advanced section of my developer app's page, my app was configured as "Web" rather than "Native/Desktop". That said, changing it to "Native/Desktop" did nothing but give me a longer-lived access_token at the first login (about 24 hours rather than 1-2), while the already-described attempts at extending its life failed just as before.
The official guideline has an interesting and quite creepy paragraph: "Desktop applications will not be able to extend the life of an existing access_token and the user must login to facebook once the token has expired". While this seems to have been overlooked by many, I started to think it might be the cause of my problems, so I tried an alternative approach: server-side authentication rather than client-side. Again, this requires the client_secret, so it would be a dumb solution for an Android app, but I wanted to try it anyway. So, I got the code first, and then the access_token after that (as described in http://developers.facebook.com/docs/authentication/server-side/). This resulted in a much longer-lived access_token (5183882 seconds, i.e. about 59 days), but then again, both of the known means of extending it (even if not really needed in this case) gave the same results as before: the former refreshed nothing, and the latter complained that the token was not obtained via SSO.
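For completeness, step 2 of that server-side flow boils down to building this second request once the code has been read off the redirect; a sketch, with placeholders throughout:

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Sketch of step 2 of the server-side flow: exchanging the ?code=... value
// read off the redirect for an access_token. Placeholders throughout.
public final class ServerSideExchange {

    public static String buildCodeExchangeUrl(String appId, String appSecret,
                                              String redirectUri, String code)
            throws UnsupportedEncodingException {
        // redirect_uri must match the one passed to the dialog in step 1 exactly.
        return "https://graph.facebook.com/oauth/access_token"
                + "?client_id=" + URLEncoder.encode(appId, "UTF-8")
                + "&client_secret=" + URLEncoder.encode(appSecret, "UTF-8")
                + "&redirect_uri=" + URLEncoder.encode(redirectUri, "UTF-8")
                + "&code=" + URLEncoder.encode(code, "UTF-8");
    }
}
```

Fetching that URL (same mechanics as the earlier exchange sketch) is what returned the form-encoded access_token=...&expires=5183882 response in my test.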
So, very long story short (I know, too late): the deadline for deprecating offline_access is so close you can feel it breathing down your neck, and nothing seems to work. What is your experience with all of this? If you're in the same boat as I am and you managed to get it working, how did you do it?
Thanks for your patience.
Related
The site of the company I work at uses a Consent Management Platform, which had been functioning OK. Recently we had to make some modifications to it and had to reimplement it. The implementation went fine; even the engineers who provide support for the CMP we're using confirmed that everything I did was correct.
And now the problem: some users still have the old cookie on their devices, so when they enter the site they now receive a 400 error and cannot access the site anymore. The fix would be for every user to manually delete the cookie on their device, but that is impossible to arrange, as our visitors are not very technical and we can't reach all of them.
So, is there any way to make some kind of change or implementation on our side, server-side, that would refresh the users' sessions and make the 400 error disappear without them having to do anything manually?
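To make it concrete, the kind of server-side change I'm imagining is sketched below as a Java servlet filter that spots the stale cookie and asks the browser to drop it (the cookie name here is hypothetical; ours would be whatever the old CMP used). I realize that if the 400 is raised by the web server itself before the request reaches the application, the equivalent fix would have to live in the server or proxy configuration instead:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch: on every request, if the stale CMP cookie is present, tell the
// browser to delete it. The cookie name is a hypothetical placeholder.
public class StaleCookieFilter implements Filter {
    private static final String STALE_COOKIE = "old_cmp_consent";

    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;
        Cookie[] cookies = request.getCookies();
        if (cookies != null) {
            for (Cookie c : cookies) {
                if (STALE_COOKIE.equals(c.getName())) {
                    Cookie expired = new Cookie(STALE_COOKIE, "");
                    expired.setPath("/"); // must match the original cookie's path
                    expired.setMaxAge(0); // Max-Age=0 asks the browser to delete it
                    response.addCookie(expired);
                }
            }
        }
        chain.doFilter(req, res);
    }
}
```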
I'm really in a pinch right now and am in need of real advice.
I'm developing a webapp which allows users to log in with their Google accounts, using OAuth2.0.
I've created an OAuth2.0 client ID, configured the OAuth consent screen with the Publishing status set to 'Testing', and added a test user.
The frontend of my app is built with React, and I'm using a package (react-google-login) to handle the flow. I can successfully sign in with the Google account I added as a test user, and retrieve the basic profile information needed.
The problem is that I can also sign in with other Google accounts which have not been added to the list of test users. I would have expected Google simply not to issue access tokens for accounts that are not on the list of test users.
I feel like I've misunderstood something about the OAuth process, or I've configured something incorrectly. I would appreciate it if anyone had any pointers.
Thanks.
It is indeed bugged.
I was in the same spot as you, assuming I had misunderstood something. After reviewing my code over and over with no luck, I made a Stack Overflow post, in which I was advised to report it through Google's bug-tracking system. After doing some troubleshooting with Google, they confirmed the bug, and they are now working on a fix (and have been for a little while already).
I included this thread as an example when talking to Google. I meant to post an update here after getting in touch with them, but I forgot, sorry!
The buganizer thread with more details:
https://issuetracker.google.com/issues/211370835
Is it possible you're only asking for the email scope?
It appears the test user filter and possibly the whole concept of the 'app' being in test mode exists only inside the consent screen feature.
For some reason, Google doesn't show the consent screen if you only ask for email.
So... maybe that means you don't need a consent screen, and therefore don't need to care what that feature thinks about your app (that your app is in test mode and needs to be verified before going into production).
Or maybe it's a bug? Or maybe just because you can do this doesn't mean it's allowed by Google's terms. Maybe they just haven't implemented preventing that use case.
Anyway, it may help you to know that if you add a more significant scope like the Calendar API then the following things will change:
Non-test users will get a message like "The developer hasn’t given you access to this app." and won't be able to complete OAuth
Test users will get a message like "Google hasn't verified this app"
Test users will see a consent screen
Basically, everything starts working as expected.
By the way, just putting "email" or "profile" for the scope seems to be an old way of doing things; all the newer scopes want you to use a full URL for the scope (despite Google themselves not using the full URL when you're configuring your scopes).
For example, if you want the email and calendar scopes, you can put this value for your scope field:
email https://www.googleapis.com/auth/calendar
Or you can use this equivalent value:
https://www.googleapis.com/auth/userinfo.email https://www.googleapis.com/auth/calendar
I'm not suggesting you add a scope like email just for the sake of it, only that this sheds light on what's happening; if there's a scope like that which you need anyway, adding it will solve your problem.
I use Safari and Firefox. Since Safari doesn't offer the cookie handling I prefer, I frequently invoke a separate tool to clear out most of the cookies. Neither Kayak nor Google are on my exceptions list. A couple of times recently, I cleaned out cache, local storage, flash, etc. to get rid of the so-called zombie cookies (while no webkit clients were running). I was in USA when I did this.
In spite of this, every time I access a Google site it redirects to google.es, and every time I access Kayak.com it redirects to kayak.co.uk.
I understand about browser fingerprinting, but I have occasionally changed a few things that affect that. And even if that weren't enough, there is nothing to hint that they have identified me specifically. I would expect that, in the absence of any completely unique identifier, they would assume the country of the IP address(es), which for over four weeks have been Comcast and AT&T. (And various public WiFi sites.)
It's not a login issue: I don't have a login for Kayak, and I never log in to Google unless absolutely necessary.
With Kayak, I changed the setting (menu in the lower-right corner of the search page) to USA/dollars, but as soon as I go to another site without deleting cookies, when I come back it's on UK/pounds again.
What might be the cause of this? There are a few other sites behaving similarly.
I think I understand it now. Someone correct me if I missed something.
It was not cookies, tracking, or an HTML redirect.
It was the browser "remembering" that I had visited kayak.co.uk and google.es while over there and "fixing" the URL for me.
I'm not sure whether this belongs on Stack Overflow or on Server Fault, so I've picked SO as a first go.
A number of years ago, there was a highly visible discussion about the misuse of HTTP cookies, leading to various cookie-filtering proxies and eventually to active cookie filtering in browsers like Firefox and Opera. Even now, Google will admit that currently about 7% of end users reject their tracking cookies, which is quite a lot, actually.
I still vet all cookies that get set in my browser, and have for years. I personally don't know anyone else who does this, but it has given me a few interesting insights into web tracking. For instance, there are many, many more sites using Google Analytics than there were even two years ago. And there are still sites (extremely few, fortunately) which malfunction hideously if you don't let them set cookies. But advertisers in particular are still setting cookies to track your way across the web.
So is there much of an anti-cookie movement anymore? Has anyone tried to take Google to task for setting so many cookies with Analytics? Is anyone trying to vilify sites like eBay and PayPal, which use a dodgy cross-site cookie to let you log in?
Or am I making too much of a stupidly small problem?
Nowadays, there are other ways to block these annoyances. Rick752's EasyList has the EasyPrivacy list, which blocks most of them with no work at all other than adding the subscription once to Adblock Plus. NoScript can (with a little configuration, mostly removing some misguided entries on the default whitelist) easily block the ones which depend on JavaScript.
That said, I set up my browser to empty all the cookies on logout. Then they can track you only for the duration of a session, which will be short unless you tend to keep your browser open for a long time (or use the session save/restore all the time).
If you use Flash, know that it also has its own kind of cookies, and the interface to manage them is most probably poorer than your browser's.
There are always people who misunderstand cookies, on both sides. Ultimately, it's up to the browsers to properly identify the sites for cookies. As long as the site's cookies are being set properly and the browser is respecting that, it's just not much of a problem. I think that, with the increased use of web toolkits that take care of the programmatic details (and better, slightly more security-conscious browsers), it's not much of an issue for end users now.
Beyond that, with the proliferation of DHTML and XML-based partial-page-loading mechanisms (as well as database backends and the like), the need to track session state between stateless pages is reduced. Your web app can very easily keep state without cookies, and that may well have been partially driven by the number of [generally misinformed] end users who blocked cookies altogether.
In shorter words: "IMHO, no".
I gave up both as user and developer.
As a user, the convenience of staying logged into sites is just too tempting, and the pain of some sites not working is too annoying. I'm not that sensitive about my privacy, so I stopped caring and let all cookies through.
As a developer, I always try to be as RESTful as possible, but I don't know of any decent way of handling authentication without cookies. HTTP Basic Auth is just too broken, I can't assume HTTPS all the time, and mangling URLs is painful and inelegant. What's left is form-based authentication with cookies. So my applications have one auth cookie. I don't need any more than that, but it by itself requires users to have cookies enabled if they want to authenticate themselves. Maybe OpenID and other federated identity services will fix that one day, but at the moment I can't rely on any of them yet.
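In spirit, that single auth cookie amounts to something like this Java servlet sketch (names and the two helper stubs are illustrative, not a real implementation):

```java
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of form-based login issuing a single auth cookie; names and the
// two helpers are illustrative stand-ins, not production code.
public class LoginServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String user = req.getParameter("username");
        String pass = req.getParameter("password");
        if (checkCredentials(user, pass)) {
            // One opaque token; everything else stays server-side.
            Cookie auth = new Cookie("auth", issueSessionToken(user));
            auth.setPath("/");
            auth.setHttpOnly(true); // keep it out of reach of scripts (Servlet 3.0+)
            resp.addCookie(auth);
            resp.sendRedirect("/");
        } else {
            resp.sendError(HttpServletResponse.SC_UNAUTHORIZED);
        }
    }

    private boolean checkCredentials(String user, String pass) {
        return false; // stub: look the user up in your own store here
    }

    private String issueSessionToken(String user) {
        return java.util.UUID.randomUUID().toString(); // stub: persist server-side
    }
}
```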
My biggest annoyance with cookies is that I want to block Analytics cookies, but at the same time I need to log in to Analytics to manage some customer sites. As far as I can tell, they are the same cookie (in fact it may be the same cookie across all Google services).
I really don't trust the Google cookie. They were apparently one of the first large companies to set cookie expiration to 2038 (the maximum), and their business model is almost entirely advertising-based (targeted advertising, at that). I suspect they know more about the day-to-day online activities and interests of people than any government or organisation on the planet.
That's not to say it's all evil or anything, but that really is a lot of trust to place in one entity. They may claim it's all anonymised, but I'm pretty sure that claim would be hard to verify. At any rate, there is no guarantee that this data won't be stolen, legally acquired, or otherwise misused at some future point for other purposes.
It isn't impossible that one day this kind of profiling could be used to target people for more serious things than ads. How hard would it be for some future Hitler to establish the IP addresses, bank accounts, schools, employers, club memberships, etc. of some arbitrary class of person for incarceration or worse?
So my answer is that this is not a small problem; history has already taught us, many times over, what can happen when you start classifying and tracking people. Cookies are not the only means, but they are certainly part of the problem, and I recommend blocking them and clearing them at every convenient opportunity.
I am also one of the hold-outs who doesn't automatically accept cookies. I do appreciate sites that need fewer, and I am more likely to return to those sites and allow cookies from them in the future.
That said, I do think that being vigilant about cookies is not (rationally) worth the effort. (In other words, I expect I will keep doing what I'm doing because it makes me feel better, even though I don't have evidence of commensurate tangible benefit.)
Every now and again I clear all my cookies. It's a pain, as I then have to log in to sites again (or reset preferences), but it's also a good test of whether either I or my browser can remember the login details.
Why don't people use CFLOGIN? I remember having problems with it in CF7 some months ago, but I can't remember what was wrong with it.
I use cflogin all the time and it works great. It can be a little tricky to get it working the way you like, but the benefits are huge. Being able to fine-tune your application with user roles takes care of the bulk of my rights-based customization. There used to be some issues with session management that made it difficult to work with; turning on J2EE sessions seems to make most of those issues go away.
Some of the popular frameworks are not compatible with cflogin, so that might be one reason you don't see a lot of it. They tend to have their own approach to securing application features.
I think a lot of people get frustrated with it because it is a little quirky and they give up on it. Others have more complicated security needs that aren't addressed completely by cflogin, so they wind up writing their own system. Specifically, there isn't an easy way to deal with rights by content asset.
The only issue I've had is with roles in CF8. It's brilliantly implemented, and it's a little cruel that it doesn't quite work as it should. Maybe in CF9.
In any event, building your own roles-based system (assign the user a session variable holding a comma-separated list of access levels that the system can check against) isn't too hard to do, and I got over it.
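Just to show the shape of that pattern, here it is sketched in Java (the CF version is the same idea with a session variable; names here are illustrative):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import javax.servlet.http.HttpSession;

// The comma-separated-roles idea sketched for illustration: store
// "editor,admin" in the session, then check membership per request.
public final class Roles {

    public static void assign(HttpSession session, String commaSeparatedRoles) {
        session.setAttribute("roles", commaSeparatedRoles); // e.g. "editor,admin"
    }

    public static boolean has(HttpSession session, String role) {
        String raw = (String) session.getAttribute("roles");
        if (raw == null) {
            return false;
        }
        Set<String> roles = new HashSet<String>(Arrays.asList(raw.split(",")));
        return roles.contains(role);
    }
}
```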
The one nice thing about cfLogin that is probably still worth using is how it ties into the Server monitor to see how many people are logged in, etc.
The point above about using J2EE sessions is a good one; it's worth doing in all CF apps. It was one of the best things I dragged myself through to get working the way I wanted.
CFLogin goes unused for three reasons.
First, it's a little touchy, a little strange, and doesn't work the way many would expect. You put some code here, and if a user isn't logged in it runs it... that's just odd, you know? It didn't help that there were some bugs early on, either.
Second, while it has the basic security features required for a web application, it doesn't go any further, and you can't really extend it easily. Who's to say that's how everybody wants it?
Third, and perhaps most realistically, people have already solved that problem. The problem area of securing an application, authentication and authorization, has been thought through in the community long enough that most people know how to just do it. CFLogin is reinventing the door: too little, too late.
Now, that's not to say that no one uses it. I personally have used it a few times with basic success, but nothing worth ringing a bell about. For most of my applications it makes more sense not to use CFLogin; the problem domains are this way or that, and CFLogin doesn't always solve them in the most intelligent way.
Do keep in mind that CFLOGIN has a catch with Basic HTTP Auth: the browser can keep sending the UserID and password even after you have called CFLOGOUT.
I know this has driven some advanced users away from it.
Here is an excerpt from LiveDocs
Caution: If you use web server-based authentication or any form authentication that uses a Basic HTTP Authorization header, the browser continues to send the authentication information to your application until the user closes the browser, or in some cases, all open browser windows. As a result, after the user logs out and your application uses the cflogout tag, until the browser closes, the cflogin structure in the cflogin tag will contain the logged-out user's UserID and password. If a user logs out and does not close the browser, another user might access pages with the first user's login.
In my case (and I suppose for some other people too), the main reason is moving from another platform, say PHP. I mean that I had already built up some knowledge and habits in ACL development and kept using them in CF.
I know how to make it handy for the user, flexible for the developer, and secure, so I don't really need to switch to cflogin.
Sometimes the same happens with other stuff; for example, in most cases I prefer to implement client-side validation with my own JS instead of using cfform/cfinput.
Because it (still!) has serious bugs, like this one:
http://www.raymondcamden.com/index.cfm/2009/8/7/Watch-out-for-this-CFLOGIN-Bug