Google OpenID/federated login periodically fails - django

I'm developing a Django app that uses python-openid. The app is running on my development server at home.
Similar to Stack Overflow's login mechanism, I'd like users to log in to my website using their Google credentials.
The code I've implemented to do this works well for a couple of weeks, and then stops working. I get stuck during the login process on the following Google page: https://www.google.com/accounts/o8/ud with this message: "The page you requested is invalid." It'll randomly start working again, but fails every few weeks or so.
Going through Yahoo's login worked for months, but today it stopped working with the following message: "This page has expired, go back to the original page and please try again" on this page: https://open.login.yahooapis.com/openid/op/auth
Here is the request, as captured by LiveHttpHeaders for Google:
https://www.google.com/accounts/o8/ud
POST /accounts/o8/ud HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.10) Gecko/2009042513 Ubuntu/8.04 (hardy) Firefox/3.0.10
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://127.0.0.1:8000/users/login/
Content-Length: 907
openid.ax.if_available=ext1&openid.mode=checkid_setup&openid.ns=http://specs.openid.net/auth/2.0&openid.realm=http://127.0.0.1:8000/accounts/login/&openid.return_to=http://127.0.0.1:8000/users/login/finish/?janrain_nonce=2009-10-05T19%3A10%3A11ZtioiRm&openid.ax.count.ext1=unlimited&openid.ax.mode=fetch_request&openid.sreg.optional=email&openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select&openid.ns.sreg=http://openid.net/extensions/sreg/1.1&openid.ns.ax=http://openid.net/srv/ax/1.0&openid.identity=http://specs.openid.net/auth/2.0/identifier_select&openid.assoc_handle=AOQobUcnzec0bpeZRztjqPrr5TQUA0aPL7SIuOPOMgWxex2HRAP09AyJ&openid.ax.required=ext0&openid.ax.type.ext0=http://schema.openid.net/namePerson&openid.ax.type.ext1=http://schema.openid.net/contact/web/default
HTTP/1.x 400 Bad Request
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
I'm not sure what's going on here, and would love some help.

It looks like the code you are using is generating a bad request URL. The real URL is https://www.google.com/accounts/o8/id, so try fixing the "ud" at the end by changing it to "id".
Hope this helps!
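For context, here is a minimal sketch of where that discovery URL would plug into python-openid (the session/store wiring below is an assumption for illustration, not the asker's actual code):

from openid.consumer.consumer import Consumer
from openid.store.memstore import MemoryStore

GOOGLE_DISCOVERY_URL = "https://www.google.com/accounts/o8/id"

def begin_google_auth(session):
    # Discovery runs against .../o8/id; python-openid then reads the
    # actual endpoint (.../o8/ud) out of the returned XRDS document.
    consumer = Consumer(session, MemoryStore())
    auth_request = consumer.begin(GOOGLE_DISCOVERY_URL)
    return auth_request.redirectURL(
        realm="http://127.0.0.1:8000/",
        return_to="http://127.0.0.1:8000/users/login/finish/",
    )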

You can construct the URI and redirect the user to it with a GET request. If you do a POST, Google expects some headers which I don't think are mentioned in the docs. Check the sample request. I tried a GET without python-openid and it works pretty well.
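A rough sketch of that GET-redirect approach in Django (the parameter values mirror the captured request above; the view name and the exact parameter set are assumptions):

from urllib.parse import urlencode
from django.shortcuts import redirect

GOOGLE_OP_ENDPOINT = "https://www.google.com/accounts/o8/ud"

def google_login(request):
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.identity": "http://specs.openid.net/auth/2.0/identifier_select",
        "openid.return_to": "http://127.0.0.1:8000/users/login/finish/",
        "openid.realm": "http://127.0.0.1:8000/",
    }
    # Send the browser to Google with a GET instead of POSTing server-side.
    return redirect(GOOGLE_OP_ENDPOINT + "?" + urlencode(params))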

You might take a look at the redirect_uri and the state inside to see if they match. I remember running into mismatched-state issues with Google login some time ago.
Btw, if you use Django, I would recommend social-app-django, which is currently active and supports multiple social login options (useful if at some point you consider adding more providers).
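If you go that route, the wiring is mostly settings. A minimal sketch, with setting names as documented by social-app-django and placeholder credentials:

# settings.py
INSTALLED_APPS += ["social_django"]

AUTHENTICATION_BACKENDS = [
    "social_core.backends.google.GoogleOAuth2",   # Google sign-in
    "django.contrib.auth.backends.ModelBackend",  # regular accounts
]

SOCIAL_AUTH_GOOGLE_OAUTH2_KEY = "your-client-id"         # placeholder
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET = "your-client-secret"  # placeholder

# urls.py
# path("social/", include("social_django.urls", namespace="social"))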

Related

How to pass a CSRF token in the first request

My question is about how cookies work. It came to mind when I loaded my page for the first time and got this:
REQUEST HEADER
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate, br
Accept-Language: en-GB,en;q=0.9,en-US;q=0.8
Cache-Control: max-age=0
Connection: keep-alive
Cookie: csrftoken=gsZxmbW4XUpE6YnaQhlrAx9JduyExVgzWEo4fXhcY4V3fbHWVtwf0msbDQDT5r43
Host: 127.0.0.1:8000
Upgrade-Insecure-Requests: 1
When the first request was sent, it already had a csrftoken in the cookie.
I tried the same in an incognito window and got the same result.
How can my browser already have the cookie without any communication with the server?
I am working on Django with Angular 7; the problem is that I am sending my request from Angular:
this.http.post('http://127.0.0.1:8000/', data, { observe: "response", withCredentials: true })
but in the response I am not getting any csrftoken in Set-Cookie.
Please help. Sorry for putting two problems in one question, but they are indirectly connected.
CSRF protection only applies to unsafe methods like PUT, POST and DELETE; it's always open for GET requests. If you are able to obtain the token through a GET the very first time, then that's well and good.
The actual issue was that Django was running on 127.0.0.1 while Angular was running on localhost (a different origin, even though both are loopback addresses), so for security (CORS) reasons the browser (Chrome) would neither send the cookie in the request nor set the cookie from the response. I had two options: change SameSite to None in the browser, or change my Angular setup to serve on 127.0.0.1.
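For the "first request" part of the question, the usual Django pattern is a bootstrap endpoint the SPA GETs once before doing any POST; a minimal sketch (the endpoint name is made up):

from django.http import JsonResponse
from django.views.decorators.csrf import ensure_csrf_cookie

@ensure_csrf_cookie
def csrf_bootstrap(request):
    # Forces Django to emit Set-Cookie: csrftoken=... on this response,
    # so the SPA holds a token before its first unsafe request.
    return JsonResponse({"ok": True})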

Error 401 on WEB API 2 when there are a lot of requests from an Android device

I’m developing an Android App and a Web Service that communicate. My Web Service is in WEB API 2 with token bearer authentication.
My problem is that when I send too many requests (~20 requests in 15 seconds) to my Web Service from my Android App, the WS responds with
“401” : “Authorization has been denied for this request”
This happens ONLY on the production server (Amen hosting) AND from the Android device. For example, if I try with Postman, everything works fine. So it's related to my production server and/or my Android app's requests.
The code for accessing the Web Service:
URL obj = new URL(SERVEUR_URL + url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
con.setRequestMethod("GET");
con.setRequestProperty("Authorization", "Bearer " + token);
con.setRequestProperty("Content-Type", "application/json");
int responseCode = con.getResponseCode();
String responseMessage = con.getResponseMessage();
The authentication provider on my Web Service is the default one. No modifications.
The request from my Android App (doesn't work every time)
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Authorization: Bearer XXXX
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 6.0; Google Nexus 5X - 6.0.0 - API 23 - 1080x1920 Build/MRA58K)
Host: api.xxxx.com
Connection: Keep-Alive
Accept-Encoding: gzip
The request from Postman (works every time)
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Host: api.xxxx.com
Connection: keep-alive
Authorization: Bearer XXXX
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Postman-Token: bca55154-775d-9709-7a8b-4793393890ad
Accept: */*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Cookie: dadaproaffinity=14ff51cc869a14d3552485cb4ceee1faa1be7165cc5d4b0e2b19370f11afcbea
What I have tried:
Reproducing the error locally: everything works fine on a local server (web and SQL) from the Android app or from Postman
Checking that the token was sent correctly in every request
Verifying that the request from Android is the same every time
Adding the missing headers to my Android app request
I've spent two days on this problem and read many Stack Overflow posts, but none of them helped.
Thanks for your help.
UPDATE 1:
With Fiddler I saw that in the GET request from Postman, there was a Cookie header. This cookie is sent when we ask for a bearer token.
Example of token response from the server
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 691
Content-Type: application/json;charset=UTF-8
Expires: -1
Server: Microsoft-IIS/8.5
Set-Cookie: .AspNet.Cookies=XXXX; path=/; HttpOnly
X-Powered-By: ASP.NET
X-Powered-By: ARR/2.5
Date: Tue, 31 May 2016 16:55:39 GMT
{"access_token":"XXXX","token_type":"bearer","expires_in":1209599,"userName":"Foo",".issued":"Tue, 31 May 2016 16:55:40 GMT",".expires":"Tue, 14 Jun 2016 16:55:40 GMT"}
Fiddler and Postman saved this cookie and automatically put it in subsequent requests to the API (see the "The request from Postman" block above). When I remove the cookie from the Postman GET request, it doesn't work (just like my Android app).
Now the question is: why does WEB API 2 send a cookie instead of only using the token? And why does the token work fine for the first requests and then stop working for the following ones?
According to the ASP.NET WebAPI2 flow you can see at the bottom of that page, it seems your requests are always authenticated but sometimes fail to get authorized.
So imo, the AuthorizationFilter [Authorize] rejects some of your requests for an unknown reason. What I would suggest is to dump the request your API receives, as well as the claims identity attached to the token. Try to see if there are differences between them when you get a successful response and when you get a 401.
That way, you may be able to determine whether it is your request that got malformed, whether it is the claims identity that is not good, or whether the AuthorizationFilter refuses you for another reason (like too many queries or something else).
Good luck!
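If you want to test the cookie theory from outside the app before digging into the filters, replaying the same GET with and without the cookie is enough. A quick Python sketch (URL, token and cookie value are placeholders):

import requests

URL = "http://api.example.com/api/Weesps/GetAvailableWeesps"  # placeholder
headers = {"Authorization": "Bearer XXXX"}                    # placeholder token
cookies = {"dadaproaffinity": "value-captured-in-Fiddler"}    # placeholder

print(requests.get(URL, headers=headers).status_code)                   # token only
print(requests.get(URL, headers=headers, cookies=cookies).status_code)  # token + cookie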
UPDATE 1
According to your new input, I think your Web API is configured to use both token and cookie authentication.
The way I see it, you have two solutions:
1°/ Store the returned cookie in your Android application and use it for the next calls. Simplest and fastest way to solve your problem without changing your whole API, but you store an authorization cookie: it can lead to security problems (CSRF attacks).
2°/ Check how your authentication and authorization filters are set, to disable cookie authentication and rely only on token authentication: this forces all requests to your API to use only the token, and prevents CSRF attacks. More complex, because you have to dig into your Web API configuration.
Check the following links (sorry, as I don't have enough reputation yet to post more than 2 links per post, you'll find them as text at the end of my answer):
ASP.net Secure a Web API 2.2[2]: From the chapter "Configuring the Authorization Server" at the bottom
MSDN article on Web API security[3]: More general and technical information about Web API security, how to secure it, and CSRF attacks
StackOverflow .NET cookie and token authentication[4]: Check David Banister's answer; I think it is exactly what you want to do: only use the token for all your API calls.
StackOverflow Authorize filter and authentication[5]: More information about such mechanisms for your API
And finally
Cookie authentication with web API and 401 codes[6]: Sounds like your actual problem, doesn't it?
I hope it helps you. Good luck!
// Links
2: www.asp.net/web-api/overview/security/individual-accounts-in-web-api
3: msdn.microsoft.com/en-us/magazine/dn201748.aspx
4: stackoverflow.com/questions/22568409/mvc-net-cookie-authenticated-system-acessing-a-web-api-with-token-authenticatio
5: stackoverflow.com/questions/21231751/authorize-filter-and-authentication
6: brockallen.com/2013/10/27/using-cookie-authentication-middleware-with-web-api-and-401-response-codes/
Finally, I got my answer:
My Web Service sends a cookie named "dadaproaffinity" the first time I make a request. This cookie was automatically put on the following requests by Postman, but not by Android's HttpUrlConnection. So I just take this cookie and now add it to every request along with the token.
But: this cookie is sent by IIS, not by my Web Service! That's why it works locally but not on the production server. I googled this cookie and there are very few answers about it. The only one I found in English is:
Technical Cookie of IIS Server hosting the site.
Need to route to the correct server session, in order to keep it active
Does anyone have more information about this IIS cookie?

URL forbidden 403 when using a tool but fine from browser

I have some images for which I need to do an HttpRequestMethod.HEAD request in order to find out some details of the image.
When I go to the image URL in a browser, it loads without a problem.
When I attempt to get the header info via my code or via online tools, it fails.
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.It to try to obtain the HEAD request, but I am getting the same 403 Forbidden message that I am getting in my code.
I have tried adding many various headers to the HEAD request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests) but none of this seems to work.
It also fails to do a normal GET request via Hurl.It. Same 403 error.
If it is relevant, my code is a C# web service and is running on the AWS cloud (just in case the adorama servers have something against AWS that I don't know about). To test this I have also spun up an EC2 (Linux box) and ran curl, which also returned the 403 error. Running curl locally on my personal computer returns the binary image data.
And just to remove the obvious thoughts: my code works successfully for many, many other websites; it is just this one where there is an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here.
Locally it works smoothly. Doing it from an AWS instance, I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node). I therefore tried to specify the same IP address my client had resolved, but that didn't fix the problem.
My guess is that Akamai (the service is provided by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN traffic, so by abusing it people can generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon and it didn't work for me. wget did work from other servers that weren't on Amazon EC2, however. Here is the wget output on EC2:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is the output:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The reason the wget outgoing IP address is different in the two examples is due to DNS resolution at the CDN provider that Adorama is using.
Web servers may check particular fingerprint attributes to prevent automated bots. Here are a few things they can check:
GeoIP / IP address
Browser headers
User agents
Plugin info
Browser fonts returned
You can simulate browser headers and learn some fingerprinting "attributes" here: https://panopticlick.eff.org
You can try to replicate how a browser behaves and inject similar headers and a similar user agent. Plain curl/wget are not likely to satisfy those conditions; even tools like PhantomJS occasionally get blocked. There is a reason some people prefer tools like Selenium WebDriver that launch an actual browser.
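As a rough sketch of that header-replication idea (Python here for brevity; the header set is copied from a desktop browser and is only a guess at what the CDN actually checks):

import requests

URL = "http://www.adorama.com/images/large/CHHB74P.JPG"

headers = {
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) "
                  "Gecko/20100101 Firefox/70.0",
    "Accept": "image/webp,image/*,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.5",
    "Accept-Encoding": "gzip, deflate",
    "Connection": "keep-alive",
}

# HEAD first, as in the question; fall back to GET if the server rejects HEAD.
resp = requests.head(URL, headers=headers, allow_redirects=True)
print(resp.status_code, resp.headers.get("Content-Type"))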
I found that another URL protected by AkamaiGHost was blocked due to certain parts of the user agent. In particular, a user agent containing a link with a protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0: okay
facebookexternalhit/1.1 (+http\://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the only thing happening here, as the Mozilla example also contains a colon, just not as part of a link.
I guess most web servers don't care and allow Facebook's bot and other bots that carry a contact URL in their user agent. But apparently AkamaiGHost does block it.

Non-persistent authentication cookie in a SPA (AngularJS / Django REST)

I have been wrestling with this issue for several hours:
I have a Single-Page Application written in Angular which communicates with a Django REST backend. I am trying to implement an auth function with session cookies. The way I see it is:
1/ Show any unlogged visitor a login page
2/ Make a POST to url/login with the credentials
3/ Obtain a "sessionid" cookie and record in a service that the user is logged in
4/ Redirect the visitor towards reserved content, and use GET & POST to access contents with the cookie
The login endpoint is already set and works. When I make a POST, I receive an HTTP 200 response with user info and a Set-Cookie, but subsequent calls do not contain the cookie:
Request URL: ...
Request Method:POST
Status Code:200 OK
Request Headers
Accept:application/json, text/plain, */*
Accept-Encoding:gzip,deflate,sdch
Accept-Language:fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Connection:keep-alive
Content-Length:38
Content-Type:application/x-www-form-urlencoded; charset=UTF-8
Host:devinify1.herokuapp..
Origin:http://mobilevinify.herokuapp...
Referer:http://mobilevinify.herokuapp...
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.63 Safari/537.36
Form Data
username:felix#vinify.co
password:test
Response Headers
Access-Control-Allow-Origin:*
Connection:keep-alive
Content-Length:189
Content-Type:application/json
Date:Sat, 14 Dec 2013 20:45:14 GMT
Server:gunicorn/18.0
Set-Cookie:sessionid=ijz27zy655qn0cwmlnvr66609hsyvdub; expires=Sat, 28-Dec-2013 20:45:14 GMT; Max-Age=1209600; Path=/
Vary:Cookie
My code is a very simple adaptation of the angular-app example:
https://github.com/FelixLC/MobileWebApp/blob/master/app/scripts/security/security.js
I have tried this on localhost and on Heroku. The server and the client are on different domains; CORS is allowed.
When I try to make calls, I receive an error from Django:
TypeError at /vinibarwines/
int() argument must be a string or a number, not 'AnonymousUser'
Should I try to get this cookie and put it in the headers with AngularJS?
You can try to log in at http://mobilevinify.herokuapp.com/#/login with felix#vinify.co & test. Then click on Vinibar; there is a 500 internal error on the GET request.
Any help much appreciated
Felix
Here is the full layout of how I actually do my authentication: Django/Angular Authentication. It's a pretty extensive response; I'm more than happy to answer further questions you might have.
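One detail worth checking in the response headers above: Access-Control-Allow-Origin is *, and browsers refuse to store or send cookies on cross-origin requests unless the server names the origin explicitly and allows credentials. A minimal sketch of the Django side, assuming the django-cors-headers package (the origin is a placeholder):

# settings.py
CORS_ALLOWED_ORIGINS = ["http://mobilevinify.herokuapp.com"]
CORS_ALLOW_CREDENTIALS = True  # emits Access-Control-Allow-Credentials: true

The Angular side must then send withCredentials on every request, not just the login POST.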

C++ HTTP always 301 using sockets

I'm sick of this. ALWAYS when I make an HTTP GET query from a C/C++ program using just plain sockets, I get 301 Moved Permanently. Normally I'd use libcurl, but in this case I don't want to add another library; I just need to download one flat identification file from one fixed server.
This is my current query:
GET /game/getversion.jsp?user=nightcracker&password=yeahright&version=12 HTTP/1.1\r\n
Connection: close\r\n
Host: www.minecraft.net\r\n
Accept-Encoding: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2\r\n
\r\n
I have tried EVERYTHING, and everything just gets answered with this funny message:
HTTP/1.1 301 Moved Permanently
Server: nginx/0.6.32
Date: Tue, 15 Mar 2011 02:18:11 GMT
Content-Type: text/html
Content-Length: 185
Connection: close
Location: http://www.minecraft.net/game/getversion.jsp?user=nightcracker&password=yeahright&version=12
<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/0.6.32</center>
</body>
</html>
I remember this issue from before, and I ragequit then. Now I want to fix this damn bugger. So tell me, SO, why do my HTTP queries always give back a 301?
Alright, besides the issue with the Accept-Encoding, the query was fine. The problem was that in my socket code I resolved "minecraft.net" instead of "www.minecraft.net". RAAAAH. Fixed.
I can't see anything obviously wrong, since the redirect URI appears to be the same as the original GET request URI, so I would suggest downloading command-line curl and running it in verbose mode against the same target. Perhaps its output will show something that points you in the right direction. There's a chance this is a badly-configured server or a badly-written JSP, so keep that in mind.
I don't know if this is the problem you're having with the Minecraft server (I don't have an account), but
Accept-Encoding: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2\r\n
what the heck is that? Header fields that might go in requests include:
Accept: MIME types (e.g. what you have there)
Accept-Charset: charsets (e.g. utf-8)
Accept-Encoding: encodings (e.g. gzip)
Accept-Language: languages (e.g. en)
and you seem to be mixing them up.
Well, the server is redirecting the client to another location. You just have to issue another request to the URL coming back in the "Location" header of the 3xx response.
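A sketch of that manual redirect-following, in Python for brevity (the original code is C++; host and path are taken from the question):

import http.client

conn = http.client.HTTPConnection("www.minecraft.net")
conn.request("GET", "/game/getversion.jsp?user=nightcracker&password=yeahright&version=12")
resp = conn.getresponse()
if resp.status in (301, 302, 303, 307, 308):
    # Issue a second request against the Location target, re-resolving
    # the host if it differs from the one you connected to.
    location = resp.getheader("Location")
    print("redirected to:", location)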
Oops, I realized that the redirect location is the same as the original URI. Does this URL work from a browser? If so, you might try adding a User-Agent header to the request containing the same User-Agent the browser sends.
You can either specify the correct URL (www.minecraft.net) or tell libcurl to follow redirects automatically:
curl_easy_setopt(curl_handle, CURLOPT_FOLLOWLOCATION, 1L); /* follow 3xx redirects */