Facebook Connect not setting cookies

I'm trying to implement Facebook Connect on a website with .NET MVC using C#.
I've followed the instructions here: http://wiki.developers.facebook.com/index.php/Trying_Out_Facebook_Connect step by step. I can make the login work, in the sense that when I log in through the site I'm also logged into Facebook.
In order to work with this on the server, I think I need to access the cookies Facebook is supposed to set, such as:
APIKEY_user
APIKEY_session_key
...
as mentioned here http://wiki.developers.facebook.com/index.php/Verifying_The_Signature.
The thing is, I'm not getting any of these cookies. I've googled and it seems like I'm the only person with this problem. Any ideas as to what I could be doing wrong? Has this happened to anyone else?

The issue was that I was developing locally using localhost.
I resolved the problem by changing the settings for the application to point to a certain web address instead of localhost, and changing my hosts file to point that same web address to 127.0.0.1.
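For illustration, the hosts entry would look like this (dev.example.com is a placeholder; use the same address you registered in the Facebook application settings):
127.0.0.1 dev.example.com
Then browse to http://dev.example.com instead of http://localhost, so the cookies are set against a domain Facebook recognizes.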

From the UI/client-side perspective, always ensure you have the correct path to the xd_receiver file in your FB.init() call.
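For reference, in the legacy Connect JavaScript library the receiver path was passed as the second argument to FB.init; a minimal sketch, assuming the file sits at the web root (substitute your own API key and path):
FB.init("YOUR_API_KEY", "/xd_receiver.htm");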

Firecookie is very useful for seeing which cookies are and aren't being set.

Related

How to set cookies in nextjs

In my Next.js project, I want to set cookies when the user logs in. With document.cookie I can set a cookie, but it is limited to one cookie at a time: if I pass more than one in a single assignment, only the first one is set. And in either case I am not able to read the cookie values in my pages; it gives a "document is not defined" error. I tried
https://github.com/js-cookie/js-cookie,
and with it I am able to set and get cookies, but I am not able to secure my cookies. It would be great if you could solve this or suggest some methods.
Thanks in advance.
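A likely explanation for the "only the first cookie" behavior: each assignment to document.cookie sets exactly one cookie, and anything after the first semicolon is parsed as an attribute (path, max-age, ...), not as another cookie. A minimal client-side sketch (cookie names are made up):
document.cookie = "token=abc123; path=/; SameSite=Lax"; // one cookie per assignment
document.cookie = "theme=dark; path=/; max-age=31536000";
Also, document only exists in the browser, which is why touching it during Next.js server-side rendering gives "document is not defined"; run this code in an effect or event handler instead.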
I'd suggest using https://www.npmjs.com/package/nookies as it's kinda tricky to do manually.
You can't use the secure flag when your app is running on localhost unless you are serving the application over https. To test whether the secure flag is working, deploy the application to a production or testing environment.
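For the "secure my cookies" part: flags like HttpOnly (and Secure, once you're on https) can only be set from the server via the Set-Cookie header, not from client-side JS. A minimal sketch as a Next.js API route; the /api/login path and the cookie values are assumptions for illustration:
// pages/api/login.ts
import type { NextApiRequest, NextApiResponse } from "next";

export default function handler(req: NextApiRequest, res: NextApiResponse) {
  // Pass an array to send multiple Set-Cookie headers in one response.
  // HttpOnly keeps the token out of document.cookie; add "Secure" on https.
  res.setHeader("Set-Cookie", [
    "token=abc123; Path=/; HttpOnly; SameSite=Lax",
    "theme=dark; Path=/; Max-Age=31536000",
  ]);
  res.status(200).json({ ok: true });
}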

Google: Permission denied to generate login hint for target domain NOT on localhost

I am trying to create a Google sign-in and getting the error:
Permission denied to generate login hint for target domain
Before you mark this a duplicate, this is not the same as the question asked at Google sign in website Error : Permission denied to generate login hint for target domain because in that case the questioner was on localhost, whereas I am getting this error on the server.
Specifically, I have included the URL of the server in the Authorized JavaScript Origins in the developer console, and when I get the error, the request shows that the same URL was sent.
Is there something else I should be putting in my Restrictions page? Is there any way to figure out what is going on here? Is there a log at the developer console that can tell me what is happening?
Okay, I figured this out. I was using an IP address (as in "http://175.132.64.120") for the redirect URI, as this was a test site on the live server, and Google only accepts actual hostnames (as in "http://mycompany.com" or "http://localhost") as redirect URIs.
Which, you know, THEY COULD HAVE SAID SOMEWHERE IN THE DOCUMENTATION, but whatever.
I know this is an old question, but it's the first result when you look for the problem via Google, so I'll share my solution with you guys.
When deploying a Google OAuth service on a private network, i.e. on an IP that can't be reached from the Internet, you should use a magic DNS service like xip.io, which will give you a URL that your browser resolves to your internal IP. You see, Google needs to be able to reach your authorized origin via your browser; that's why setting localhost works if you're serving from your own computer, but it won't work when you're deploying outside the Internet, as on a VPN, an intranet, or behind a tunnel.
So, the steps:
Get the IP address you're deploying at (one without a public domain); let's say it's 10.0.0.1 as an example.
Add http://10.0.0.1.xip.io to your Authorized JavaScript Origins in the Google Developer Console.
Open your site by visiting http://10.0.0.1.xip.io.
Clear your cache for the site, if necessary.
Log in with Google, and voilà.
I got to this solution using this answer in another question.
If you are using http://127.0.0.1/projects/testplateform, change it to http://localhost/projects/testplateform, and it will work just fine.
If you are testing on your machine (locally), then don't use the IP address (i.e. http://127.0.0.1:8888) in the Client ID configuration; use localhost instead and it should work.
Example: http://localhost:8888
To allow an IP address to be used as a valid JavaScript origin, first add an entry to your /etc/hosts file:
10.0.0.1 mydevserver.com
and then add that domain, mydevserver.com, to your Authorized JavaScript Origins. If you are using a nonstandard port, specify it with your domain in the Authorized JavaScript Origins.
Note: clear your cache and it will work.
Just ran across this same issue on an external test server without a DNS entry yet. If you have permission on your local machine, just edit your /etc/hosts file:
175.132.64.120 www.jimboweb.com
and use http://www.jimboweb.com as an authorized domain.
I have a server on a private network, IP 172.16.X.X.
The problem was solved by SSH-forwarding the app's port to a port on my localhost.
Now I am able to use the deployed app with Google OAuth by browsing to localhost.
ssh -N -L8081:localhost:8080 ${user}@${host}
I also added http://localhost:8081 to "Authorized redirect URIs" and "Authorized JavaScript origins" in console.developers.google.com.
After battling with it for a few hours, I found out that my config in the Google Cloud console was all correct and similar to the answers provided. Due to caching issues or something, I had to recreate an OAuth client ID, and then it suddenly started working.
It's a pretty old issue, but I encountered it and there wasn't any helpful resource, so I am posting my solution.
For me the issue arose when I hosted my web app locally, using Google auth for logging in.
The URL I was trying to hit was: http://127.0.0.1:8000/master
I just changed the IP to http://localhost:8000/master/
And it worked. I was able to log in to the website using Google auth.
Hope this helps someone someday.
Install XAMPP and run the Apache server,
put your files (index and co.) in a folder in the XAMPP dir (c:\xampp\htdocs\yourfolder).
Type this in your browser URL bar: http://localhost/yourfolder/index.html

Delete postman cache

I use the Postman extension to check out my RESTful APIs.
I am trying to make a request to my localhost, but it seems to have cached one of the query parameters.
I tried clearing the cache of my Chrome browser, but this does not seem to work. I even went to the extent of changing the API resource name.
Has anyone come across such an issue?
The Cache-Control request header can be used, but one thing to clarify:
no-cache does not mean "do not cache". In fact, it means "revalidate with the server" before using any cached response, on every HTTP request. If the server says the resource is still valid, then the cache will still use the cached version.
no-store, by contrast, is effectively asking not to cache at all, and is intended to prevent anything from being stored in the cache.
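For illustration, a raw request with the header set (host and path are made up):
GET /api/items?id=42 HTTP/1.1
Host: localhost:8080
Cache-Control: no-cache
With no-cache, a cached copy may still be used after the server revalidates it (e.g. answers 304 Not Modified); use Cache-Control: no-store to bypass the cache entirely.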
I tried the solution above and it didn't work for me. What worked was restarting the application. I'm using Eclipse and running a Spring Boot application.
In case someone is using the same environment and facing the same problem, it may help.
I suggest using the Postman app rather than the extension, because with the Postman app you can do a lot more cool things, like using the console to debug your APIs and creating/deleting cookies and cache entries with an excellent GUI.
I came across the same situation where requests were cached in Postman. I deleted the JSESSIONID cookie from the Cookies section in Postman rather than closing the app; that solved my problem (meaning the call reached my localhost app) and I got an accurate response. Please try it if you need this solution.
I usually just request the data in a Chrome incognito tab / Firefox private tab, which I guess resets the cache, and then it appears in my Postman app.
(I would recommend using the Postman app instead of the website, as it has many more features!)

Coldfusion Missing template handler served as http instead of https

Our ColdFusion webpage is served using https, but we sometimes get the dreaded error "Do you want to view only the webpage content that was delivered securely?"
Using HttpWatch, I can see that it happens when the ColdFusion missing template handler is called; the page missingtemplate.cfm is served using http. How can I configure it to always use https?
Given you say the missing files in this case are /CFIDE/scripts/cfform.js and /CFIDE/scripts/masks.js, can you not work around the issue by establishing a /CFIDE/scripts virtual directory so that the web server doesn't think they're missing? If you don't want to be giving access to /CFIDE/scripts (which some people will rail against), then you could relocate them to your website dir and point CF at them with <cfajaximport>.
That said, this masks the issue rather than solving it. However, as per my comment against your question, I'm curious as to how CF is involved in a 404 situation for a non-CF file. It should be the web server dealing with that sort of thing, not CF. Is there some piece of the puzzle here you are not stating which would help explain this?

Secure Browsing Method of Getting Facebook Photos Using APIs

Using the facebook graph you can get photo information as follows:
https://graph.facebook.com/20531316728
However, the link they provide to actually grab the photos is not secure and uses http:
http://profile.ak.fbcdn.net/hprofile-ak-snc4/174597_20531316728_2866555_s.jpg
Replacing http with https doesn't do the trick, because you get a security warning:
https://profile.ak.fbcdn.net/hprofile-ak-snc4/174597_20531316728_2866555_s.jpg
Facebook is insisting that all apps use secure browsing and https. However, my app uses Facebook photos, which cannot be accessed because their URLs begin with http.
Does anyone know how to get around this problem?
I found the answer to my own question. You can add a parameter to the request to get SSL resources:
https://graph.facebook.com/20531316728?return_ssl_resources=1
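The same call from client-side JavaScript, as a sketch (only the endpoint quoted above is assumed; error handling omitted):
// Ask the Graph API to include https URLs for the photo resources.
const res = await fetch("https://graph.facebook.com/20531316728?return_ssl_resources=1");
const info = await res.json(); // picture fields should now carry https fbcdn links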
I've never come across a way to ask the API for valid https versions of the images, other than for profile pictures. That is done via https://graph.facebook.com/{userId/Name}/picture
Here's Zuck: https://graph.facebook.com/4/picture and https://graph.facebook.com/zuck/picture
If you're using the PHP SDK, this was a f***ing life-saver (where $album['cover_photo'] is the ID of a photo):
$this->facebook->api($album['cover_photo'], 'GET', array('return_ssl_resources' => 1));
Whenever I simply added &return_ssl_resources=1 to the end of the query itself, my server would throw a 500 error. I found another thread that showed you can pass this argument in an array.