We are catching a BigCommerce webhook event in our Google Cloud Run application. The request looks like:
Headers
host: abc-123-ue.a.run.app
AccountId: ABC
Content-Type: application/json
Password: Goodbye
Platform: BC
User-Agent: akka-http/10.1.10
Username: Hello
Content-Length: 197
Connection: keep-alive
Body
{"created_at":1594914374,"store_id":"1001005173","producer":"stores/gy68868uk5","scope":"store/product/created","hash":"139fab64ded23b3e1b8473ba24ab21bedd3f535b","data":{"type":"product","id":132}}
For some reason, this request gets a 400 response from Google Cloud Run; it never even seems to reach our application. All other endpoints work (including other POST requests).
Any ideas?
Edit
In the original post, I had the path in the host header. That was a transcription mistake made while writing this post, not the actual value passed to us. We can only inspect the request via RequestBin (I can't find the request values anywhere in the Google logs), so I was speculating about the host value and made a mistake writing it out here.
Research so far...
So upon further testing, it seems that BigCommerce webhooks also fail to send to any Google Cloud Function we set up. As a workaround, I'm having Pipedream catch the webhook and forward the payload to our application, with no problems there. The endpoint also works with mirrored payloads sent from local machines and from Zapier, which seems to rule out authentication errors.
We are running FastAPI on Google Cloud Run and the simplest possible function on Google Cloud Functions. This seems to be a problem with how Google's serverless frontends and BigCommerce webhook events communicate with each other. I'm just not sure how.
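For reference, the Cloud Run side is nothing exotic; here is a minimal sketch of the kind of FastAPI receiver we are running (the route path and handler name are hypothetical, not our exact code):

# Minimal sketch of a FastAPI webhook receiver; route and handler names
# are hypothetical. The real handler does more than echo the payload.
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/bigcommerce/webhooks/")
async def bigcommerce_webhook(request: Request):
    payload = await request.json()
    # e.g. payload["scope"] == "store/product/created", payload["data"]["id"] == 132
    return {"received": payload.get("scope")}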
Here are the headers we managed to capture on one of the only times a BigCommerce Webhook Event came through to our Google Cloud Function:
Content-Length: 197
Content-Type: application/json
Host: us-central1-abc-123.cloudfunctions.net
User-Agent: akka-http/10.1.10
Forwarded: for="0.0.0.0";proto=https
Function-Execution-Id: unes7v34vzyo
X-Appengine-Country: ZZ
X-Appengine-Default-Version-Hostname: f696ddc1d56c3fd66p-tp.appspot.com
X-Appengine-Https: on
X-Appengine-Request-Log-Id: 5f10e15c00ff082ecbb02ee3a70001737e6636393664646331643536633366643636702d7470000165653637393633633164376565323033383131366437343031613365613263303a36000100
X-Appengine-Timeout-Ms: 599999
X-Appengine-User-Ip: 0.0.0.0
X-Cloud-Trace-Context: a62207698d141465d0f38488492d088b/9870406606828581415
X-Forwarded-For: 0.0.0.0
X-Forwarded-Proto: https
Accept-Encoding: gzip
Connection: close
> host: abc-123-ue.a.run.app/bigcommerce/webhooks/
This is most likely the issue. A Host header must contain only the hostname (and optionally a port), never the request path.
You can clearly see this will fail:
$ curl -IvH 'Host: pdf-2wvlk7vg3a-uc.a.run.app/foo' https://pdf-2wvlk7vg3a-uc.a.run.app
...
HTTP/2 400
However, if you don't craft the Host header yourself, it will work.
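The same behavior can be reproduced from Python; a minimal sketch using the requests library against the same service URL as the curl example above:

# Minimal sketch: a Host header that contains a path is rejected by the
# Google frontend with a 400 before the request reaches the container.
import requests

url = "https://pdf-2wvlk7vg3a-uc.a.run.app"

bad = requests.get(url, headers={"Host": "pdf-2wvlk7vg3a-uc.a.run.app/foo"})
print(bad.status_code)  # 400

good = requests.get(url)  # Host is derived from the URL, so this is fine
print(good.status_code)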
Related
We have cached HTML and PNG pages in Akamai by changing the WAA config, but we are unable to validate this through Fiddler, Live HTTP Headers, or curl. The curl command and response are below. Please help if I missed any headers.
Curl command:
$ curl -H "Pragma: akamai-x-cache-on, akamai-x-cache-remote-on, akamai-x-check-cacheable, akamai-x-get-cache-key, akamai-x-get-extracted-values, akamai-x-get-nonces, akamai-x-get-ssl-client-session-id, akamai-x-get-true-cache-key, akamai-x-serial-no" -IXGET "url"
Response:
HTTP/1.1 200 OK
Date: Thu, 20 Oct 2016 19:15:53 GMT
Content-Length: 19836
Content-Type: image/png
X-FRAME-OPTIONS: DENY
Well, in the Akamai WAF (Web Application Firewall) there are a few rules that prevent the Pragma debug headers from being returned to users. You need to create an exception for those WAF rules and add only trusted IPs to it. Then you will be able to see the information you are looking for. Thanks, Vinod
I would double check that you are definitely sending that Pragma header with the request and that it is also properly formed. I've seen a lot of problems with people trying to set this up and not getting it right.
Also, it's worth reviewing your Akamai configuration, because it is possible to switch this off entirely; some clients prefer to do so for security reasons.
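If you want to script the check rather than eyeball it in Fiddler, here is a minimal sketch in Python (the target URL is a placeholder; the diagnostics only come back when the WAF/config allows them):

# Minimal sketch: send Akamai's debug Pragma values and print whatever
# X-Cache* / X-Check* diagnostics the edge is willing to return.
import requests

pragma = ", ".join([
    "akamai-x-cache-on",
    "akamai-x-cache-remote-on",
    "akamai-x-check-cacheable",
    "akamai-x-get-cache-key",
    "akamai-x-get-true-cache-key",
])

resp = requests.get("https://example.com/image.png", headers={"Pragma": pragma})
for name, value in resp.headers.items():
    if name.lower().startswith(("x-cache", "x-check")):
        print(name + ": " + value)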
I’m developing an Android app and a web service that communicate with each other. My web service is built on Web API 2 with bearer token authentication.
My problem is that when I send many requests (~20 requests in 15 seconds) to my web service from my Android app, the web service responds with
“401” : “Authorization has been denied for this request”
This happens ONLY on the production server (Amen hosting) AND from the Android device. For example, if I try with Postman, everything works fine. So it's related to my production server and/or my Android app's request.
The code used to access the web service:
import java.net.HttpURLConnection;
import java.net.URL;

// Open a connection to the API endpoint
URL obj = new URL(SERVEUR_URL + url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
con.setRequestMethod("GET");
// Send the bearer token and content type with every request
con.setRequestProperty("Authorization", "Bearer " + token);
con.setRequestProperty("Content-Type", "application/json");
int responseCode = con.getResponseCode();
String responseMessage = con.getResponseMessage();
The authentication provider on my Web Service is the default one. No modifications.
The request from my Android app (does not work every time):
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Authorization: Bearer XXXX
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 6.0; Google Nexus 5X - 6.0.0 - API 23 - 1080x1920 Build/MRA58K)
Host: api.xxxx.com
Connection: Keep-Alive
Accept-Encoding: gzip
The request from Postman (works every time):
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Host: api.xxxx.com
Connection: keep-alive
Authorization: Bearer XXXX
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Postman-Token: bca55154-775d-9709-7a8b-4793393890ad
Accept: */*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Cookie: dadaproaffinity=14ff51cc869a14d3552485cb4ceee1faa1be7165cc5d4b0e2b19370f11afcbea
What I have tried:
Reproducing the error locally: everything works fine against a local server (web and SQL) from both the Android app and Postman
Checking that the token is sent correctly in every request
Verifying that the request from Android is identical every time
Adding the headers my Android app's request was missing
I have spent two days on this problem and read many Stack Overflow posts, but none of them helped.
Thanks for your help.
UPDATE 1:
With Fiddler I saw that the GET request from Postman contained a Cookie header. This cookie is sent by the server when we ask for a bearer token.
Example of a token response from the server:
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 691
Content-Type: application/json;charset=UTF-8
Expires: -1
Server: Microsoft-IIS/8.5
Set-Cookie: .AspNet.Cookies=XXXX; path=/; HttpOnly
X-Powered-By: ASP.NET
X-Powered-By: ARR/2.5
Date: Tue, 31 May 2016 16:55:39 GMT
{"access_token":"XXXX","token_type":"bearer","expires_in":1209599,"userName":"Foo",".issued":"Tue, 31 May 2016 16:55:40 GMT",".expires":"Tue, 14 Jun 2016 16:55:40 GMT"}
Fiddler and Postman saved this cookie and automatically included it in subsequent requests to the API (see the "request from Postman" block above). When I remove the cookie from the Postman GET request, it fails, just like my Android app.
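The difference is easy to reproduce outside Postman. A minimal sketch in Python, with placeholder URL, credentials, and a hypothetical token endpoint (requests.Session persists cookies like Postman does; a bare request behaves like HttpURLConnection):

# Minimal sketch (placeholder URL/credentials): a Session replays the
# Set-Cookie from the token response, like Postman; a bare request sends
# only the bearer token, like Android's HttpURLConnection.
import requests

API = "http://api.xxxx.com"

session = requests.Session()
resp = session.post(API + "/token",  # hypothetical token endpoint
                    data={"grant_type": "password",
                          "username": "Foo", "password": "secret"})
token = resp.json()["access_token"]
auth = {"Authorization": "Bearer " + token}

with_cookie = session.get(API + "/api/Weesps/GetAvailableWeesps", headers=auth)
without_cookie = requests.get(API + "/api/Weesps/GetAvailableWeesps", headers=auth)
print(with_cookie.status_code, without_cookie.status_code)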
Now the question is: why does Web API 2 send a cookie instead of relying only on the token? And why does the token work fine for the first requests but not for the following ones?
According to the ASP.NET Web API 2 flow shown at the bottom of that page, it seems your requests are always authenticated but sometimes fail to be authorized.
So, in my opinion, the AuthorizationFilter [Authorize] rejects some of your requests for an unknown reason. What I would suggest is to dump the request your API receives, as well as the claims identity attached to the token, and look for differences between a request that gets a successful response and one that gets a 401.
That way, you may be able to determine whether your request got malformed, whether the claims identity is not right, or whether the AuthorizationFilter refuses you for another reason (like too many queries).
Good luck!
UPDATE 1
According to your new input, I think that your Web API is configured to use both token and cookie authentication.
What I see here is that you have two solutions:
1°/ Store the returned cookie in your Android application and use it for the next calls. This is the simplest and fastest way to solve your problem without changing your whole API, but you are storing an authorization cookie, which can lead to security problems (CSRF attacks).
2°/ Check how your authentication and authorization filters are set up, disable cookie authentication, and rely only on token authentication: this forces all requests to your API to use the token and protects you from CSRF attacks. It is more complex, because you have to dig into your Web API configuration.
Check the following links (sorry, as I don't have enough reputation yet to post more than 2 links per post, you'll find them as text at the end of my answer):
ASP.NET Secure a Web API 2.2[2]: From the chapter "Configuring the Authorization Server" at the bottom
MSDN article on Web API security[3]: More general and technical information about Web API security, how to secure it, and CSRF attacks
StackOverflow .NET cookie and token authentication[4]: Check David Banister's answer; I think it is exactly what you want to do: only use tokens for all your API calls.
StackOverflow Authorize filter and authentication[5]: More information about such mechanisms for your API
And finally
Cookie authentication with web API and 401 codes[6]: Sounds like your actual problem, doesn't it?
I hope it helps you, good luck!
// Links
2: www.asp.net/web-api/overview/security/individual-accounts-in-web-api
3: msdn.microsoft.com/en-us/magazine/dn201748.aspx
4: stackoverflow.com/questions/22568409/mvc-net-cookie-authenticated-system-acessing-a-web-api-with-token-authenticatio
5: stackoverflow.com/questions/21231751/authorize-filter-and-authentication
6: brockallen.com/2013/10/27/using-cookie-authentication-middleware-with-web-api-and-401-response-codes/
Finally, I got my answer:
My web service sends a cookie named "dadaproaffinity" the first time I make a request. This cookie was automatically included in the following requests by Postman, but not by Android's HttpUrlConnection. So I just take this cookie and now add it, together with the token, to every request.
But: this cookie is sent by IIS, not by my web service! That's why it works locally but not on the production server. I googled this cookie and there are very few answers about it. The only one I found in English is:
Technical Cookie of IIS Server hosting the site.
Need to route to the correct server session, in order to keep it active
Does anyone have more information about this IIS Cookie ?
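For anyone who hits the same thing: the fix amounts to capturing the affinity cookie once and replaying it. A minimal sketch in Python with placeholder URL and credentials, assuming the cookie is set on the token response as it was in our case (the Android version is the same idea via setRequestProperty("Cookie", ...)):

# Minimal sketch (placeholder URL/credentials): capture the IIS/ARR
# affinity cookie from the token response, then send it back together
# with the bearer token on every call.
import requests

resp = requests.post("http://api.xxxx.com/token",
                     data={"grant_type": "password",
                           "username": "Foo", "password": "secret"})
token = resp.json()["access_token"]
affinity = resp.cookies.get("dadaproaffinity")  # set by IIS/ARR, not the app

r = requests.get("http://api.xxxx.com/api/Weesps/GetAvailableWeesps",
                 headers={"Authorization": "Bearer " + token},
                 cookies={"dadaproaffinity": affinity})
print(r.status_code)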
I have some images for which I need to do an HttpRequestMethod.HEAD request in order to find out some details of the image.
When I go to the image URL in a browser, it loads without a problem.
When I attempt to get the header info via my code or via online tools, it fails.
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.it to try to obtain the HEAD response, but I am getting the same 403 Forbidden message that I get in my code.
I have tried adding various headers to the HEAD request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests), but none of this seems to work.
It also fails on a normal GET request via Hurl.it. Same 403 error.
If it is relevant, my code is a C# web service running on AWS (just in case the Adorama servers have something against AWS that I don't know about). To test this I also spun up an EC2 Linux box and ran curl, which returned the same 403 error. Running curl locally on my personal computer returns the binary image data.
And just to rule out the obvious: my code works successfully for many other websites; it is only this one that has an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here. Locally it works smoothly; doing it from an AWS instance I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node), so I tried to specify the same IP address my client resolved, but that didn't fix the problem.
My guess is that Akamai (the site is fronted by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN traffic, and by abusing it people could generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon and it didn't work for me. wget did work from other servers that weren't on Amazon EC2, however. Here is the wget output on EC2:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is the output:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The reason the IP address wget connects to differs between the two examples is DNS resolution by the CDN provider that Adorama is using.
Web servers may check particular fingerprint attributes to block automated bots. Here are a few things they can check:
GeoIP / IP address
Browser headers
User agent
Plugin info
Fonts the browser reports
You may simulate the browser header and learn some fingerprinting "attributes" here : https://panopticlick.eff.org
You can try to replicate how a browser behaves and inject similar headers and a user agent. Plain curl/wget are unlikely to satisfy those conditions; even tools like PhantomJS occasionally get blocked. There is a reason some prefer tools like Selenium WebDriver that launch an actual browser. A minimal sketch of the header-injection approach follows.
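As an illustration of that approach, a minimal sketch in Python that retries the request with browser-like headers (whether it passes depends entirely on the CDN's bot rules):

# Minimal sketch: send a HEAD request with browser-like headers instead
# of the defaults that curl/wget announce.
import requests

browser_headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/50.0.2661.102 Safari/537.36",
    "Accept": "image/webp,image/*,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.8",
    "Accept-Encoding": "gzip, deflate",
}

r = requests.head("http://www.adorama.com/images/large/CHHB74P.JPG",
                  headers=browser_headers)
print(r.status_code, r.headers.get("Content-Type"))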
I found that another URL, also protected by AkamaiGHost, was blocked due to certain parts of the user agent. In particular, a user agent containing a link with a protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website, I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0: okay
facebookexternalhit/1.1 (+http\://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the whole story, as the Mozilla example also contains a colon, just not as part of a link.
I guess most web servers don't care and allow Facebook's bot and other bots that carry a contact URL in their user agent. But apparently AkamaiGHost does block it. A quick way to test this yourself is sketched below.
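A minimal sketch reproducing the table above programmatically (the target URL is the same placeholder as in the curl test):

# Minimal sketch: probe which User-Agent values the edge rejects.
import requests

user_agents = [
    "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0",
    "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)",
    "https ://bar",
    "https://bar",
]

for ua in user_agents:
    r = requests.get("https://some.website", headers={"User-Agent": ua})
    print(r.status_code, repr(ua))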
I'm writing a program in C++ that needs to download JSON data from an HTTPS URL. The program is based on wxWidgets. The URL in question is for the translation service at Glosbe.
So I've tried multiple different libraries including:
libcurl
Boost.Asio
the http functionality included in wxWidgets
wxCurl
Urdl
However, every attempt either throws an error saying it can't connect, or I get a reply that says "Moved Permanently".
When I copy and paste the URL I am testing into a browser, it returns the JSON data perfectly.
Does anyone know the correct way to do this?
Any help would be great!
301 Moved Permanently is what the server responds when you try to access the page with HTTP instead of HTTPS. Here's a complete response I just received from the server:
HTTP/1.1 301 Moved Permanently
Server: nginx
Date: Thu, 16 Jul 2015 20:25:01 GMT
Content-Type: text/html
Content-Length: 178
Connection: keep-alive
Location: https://en.glosbe.com/a-api
It means exactly that: "The content you are looking for is really at https://en.glosbe.com/a-api." Your browser simply adheres to the HTTP protocol by following the server's hint and automatically proceeding to request https://en.glosbe.com/a-api when you try to access http://en.glosbe.com/a-api. It works seamlessly for you as a user.
You will have to read more documentation to create HTTPS requests yourself. Each of the libraries you mentioned will have a different way of supporting HTTPS (or not support it at all). For example, have a look at http://www.boost.org/doc/libs/1_58_0/doc/html/boost_asio/overview/ssl.html, especially the "Notes" section where it says that "OpenSSL is required to make use of Boost.Asio's SSL support."
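To see the behavior concretely (shown here in Python for brevity; the same applies to whichever C++ library you pick once TLS is configured):

# Minimal sketch: the plain-HTTP URL answers 301 with a Location header,
# while the HTTPS URL answers with the content directly.
import requests

r = requests.get("http://en.glosbe.com/a-api", allow_redirects=False)
print(r.status_code, r.headers.get("Location"))  # 301, https://en.glosbe.com/a-api

r = requests.get("https://en.glosbe.com/a-api")
print(r.status_code)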
I'm developing a Django app that uses python-openid. The app is running on my development server at home.
Similar to stackoverflow's login mechanism, I'd like users to login to my website using their Google credentials.
The code I've implemented works well for a couple of weeks and then stops working. I get stuck during the login process on the following Google page: https://www.google.com/accounts/o8/ud, with this message: "The page you requested is invalid." It will randomly start working again, but fails every few weeks or so.
Going through Yahoo's login worked for months, but today it stopped working with the message "This page has expired, go back to the original page and please try again" on this page: https://open.login.yahooapis.com/openid/op/auth
Here is the request, as captured by LiveHttpHeaders for Google:
https://www.google.com/accounts/o8/ud
POST /accounts/o8/ud HTTP/1.1
Host: www.google.com
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.10) Gecko/2009042513 Ubuntu/8.04 (hardy) Firefox/3.0.10
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
Referer: http://127.0.0.1:8000/users/login/
Content-Length: 907
openid.ax.if_available=ext1&openid.mode=checkid_setup&openid.ns=http://specs.openid.net/auth/2.0&openid.realm=http://127.0.0.1:8000/accounts/login/&openid.return_to=http://127.0.0.1:8000/users/login/finish/?janrain_nonce=2009-10-05T19%3A10%3A11ZtioiRm&openid.ax.count.ext1=unlimited&openid.ax.mode=fetch_request&openid.sreg.optional=email&openid.claimed_id=http://specs.openid.net/auth/2.0/identifier_select&openid.ns.sreg=http://openid.net/extensions/sreg/1.1&openid.ns.ax=http://openid.net/srv/ax/1.0&openid.identity=http://specs.openid.net/auth/2.0/identifier_select&openid.assoc_handle=AOQobUcnzec0bpeZRztjqPrr5TQUA0aPL7SIuOPOMgWxex2HRAP09AyJ&openid.ax.required=ext0&openid.ax.type.ext0=http://schema.openid.net/namePerson&openid.ax.type.ext1=http://schema.openid.net/contact/web/default
HTTP/1.x 400 Bad Request
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
I'm not sure what's going on here, and would love some help.
It looks like the code you are using is generating a bad request URL. The real URL is https://www.google.com/accounts/o8/id, so try fixing the "ud" at the end by changing it to "id".
Hope this helps!
You can construct the URI yourself and redirect the user to it with the GET method. If you do a POST, Google expects some headers that, I believe, are not mentioned in the docs. Check the sample request. I tried GET without python-openid and it works pretty well; a sketch of building such a URL follows.
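For illustration, a minimal sketch of building the redirect URL by hand (parameter values are copied from the captured request above; the realm/return_to values are the local placeholders from the question):

# Minimal sketch: build the checkid_setup URL and send the user to it
# with GET instead of POSTing to the endpoint.
from urllib.parse import urlencode

OPENID_NS = "http://specs.openid.net/auth/2.0"
params = {
    "openid.ns": OPENID_NS,
    "openid.mode": "checkid_setup",
    "openid.claimed_id": OPENID_NS + "/identifier_select",
    "openid.identity": OPENID_NS + "/identifier_select",
    "openid.realm": "http://127.0.0.1:8000/accounts/login/",
    "openid.return_to": "http://127.0.0.1:8000/users/login/finish/",
}
redirect_url = "https://www.google.com/accounts/o8/ud?" + urlencode(params)
# In Django: return HttpResponseRedirect(redirect_url)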
You might take a look at the redirect_uri and the state inside it to see if they match. I remember running into mismatched state some time ago with Google Login.
By the way, if you use Django, I would recommend social-app-django, which is currently active and supports multiple social login options (useful if at some point you consider adding more social login providers).