Use libcurl with a Blue Coat cookie proxy

I am trying to connect through a Blue Coat proxy which uses a cookie during proxy authentication.
I have been completely unable to find a combination of CURLOPT_ settings that will get libcurl to present the cookie during proxy authentication.
The proxy responds with:
HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: NTLM
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Set-Cookie: BCSI-CS-EDD688431754D715=2; Path=/
Connection: close
Content-Length: 825
But curl does not present the cookie in subsequent authentication attempts, no matter what I set for CURLOPT_COOKIEFILE or CURLOPT_COOKIEJAR.
NOTE: I am also using (because I must)
CURLOPT_PROXYTYPE = CURLPROXY_HTTP
CURLOPT_PROXYAUTH = CURLAUTH_ANY
CURLOPT_HTTPPROXYTUNNEL = 1
CURLOPT_CONNECT_ONLY = 1
Is it reasonable to expect libcurl to present a cookie with a Proxy-Authorization request?
I am using the curl_easy_* interface; would moving to the multi interface help?
Finally, I am building with libcurl 7.19.7.
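For concreteness, the setup in curl_easy_setopt form looks roughly like this (URL, proxy address and credentials are placeholders):
/* Sketch of the setup described above; proxy host and credentials
   are placeholders. */
CURL *curl = curl_easy_init();
curl_easy_setopt(curl, CURLOPT_URL, "https://target.example.com/");
curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:8080");
curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:password");
curl_easy_setopt(curl, CURLOPT_PROXYTYPE, CURLPROXY_HTTP);
curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);
curl_easy_setopt(curl, CURLOPT_CONNECT_ONLY, 1L);
curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");  /* enable the cookie engine */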

The CONNECT request is handled somewhat separately in the code from the "regular" requests, and it seems there is no cookie handling done there at all! I consider it a libcurl bug.
(This is my comment from above, turned into a proper answer.)
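Until that is fixed in libcurl, one possible workaround (an untested sketch; you would have to parse the Set-Cookie value out of the 407 response yourself) is to inject the proxy's cookie as a raw header. On old versions such as 7.19.7, headers set with CURLOPT_HTTPHEADER are also sent in the CONNECT request; since 7.37.0 there is a dedicated CURLOPT_PROXYHEADER for this:
/* Workaround sketch: bypass the cookie engine and send the proxy's
   cookie as a literal header.  Pre-7.37.0, CURLOPT_HTTPHEADER headers
   also go out with the CONNECT request. */
struct curl_slist *hdrs = NULL;
hdrs = curl_slist_append(hdrs, "Cookie: BCSI-CS-EDD688431754D715=2");
curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
/* On libcurl 7.37.0 or later, the cleaner route is:
 *   curl_easy_setopt(curl, CURLOPT_PROXYHEADER, hdrs);
 *   curl_easy_setopt(curl, CURLOPT_HEADEROPT, CURLHEADER_SEPARATE);
 */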

It is possible to create a tunnel through a Blue Coat proxy. But my advice is not to use a network with a Blue Coat proxy at all. In a free country it should not be a problem to buy a SIM card and use a mobile network instead.
Read more at https://bluecoatproxy.wordpress.com

Related

Error 401 on WEB API 2 when there is lot of request from Android device

I’m developing an Android App and a Web Service that communicate. My Web Service is in WEB API 2 with token bearer authentication.
My problem is that when I send too many requests (~20 requests in 15 seconds) to my Web Service from my Android App, the WS responds with
“401” : “Authorization has been denied for this request”
This happens ONLY on the production server (Amen hoster) AND from the Android device. For example, if I try with Postman, everything works fine. So it is related to my production server and/or my Android app's requests.
The code for access to the Web Service
URL obj = new URL(SERVEUR_URL + url);
HttpURLConnection con = (HttpURLConnection) obj.openConnection();
con.setRequestMethod("GET");
con.setRequestProperty("Authorization", "Bearer " + token);
con.setRequestProperty("Content-Type", "application/json");
int responseCode = con.getResponseCode();
String responseMessage = con.getResponseMessage();
The authentication provider on my Web Service is the default one. No modifications.
The request from my Android App (does not work every time)
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Authorization: Bearer XXXX
Content-Type: application/json
User-Agent: Dalvik/2.1.0 (Linux; U; Android 6.0; Google Nexus 5X - 6.0.0 - API 23 - 1080x1920 Build/MRA58K)
Host: api.xxxx.com
Connection: Keep-Alive
Accept-Encoding: gzip
The request from Postman (works every time)
GET http://api.xxxx.com/api/Weesps/GetAvailableWeesps HTTP/1.1
Host: api.xxxx.com
Connection: keep-alive
Authorization: Bearer XXXX
Cache-Control: no-cache
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36
Postman-Token: bca55154-775d-9709-7a8b-4793393890ad
Accept: */*
Accept-Encoding: gzip, deflate, sdch
Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
Cookie: dadaproaffinity=14ff51cc869a14d3552485cb4ceee1faa1be7165cc5d4b0e2b19370f11afcbea
What I have tried:
Reproduced the error locally: it works fine on a local server (web and SQL servers), from the Android app or from Postman
Checked that the token is sent correctly in every request
The request from Android is the same every time
Tried adding the missing headers to my Android app's request
I have spent two days on this problem and read many Stack Overflow posts, but none of them helped.
Thanks for your help.
UPDATE 1 :
With Fiddler I saw that the GET request from Postman contained a Cookie header. This cookie is sent when we ask for a bearer token.
Example of token response from the server
HTTP/1.1 200 OK
Cache-Control: no-cache
Pragma: no-cache
Content-Length: 691
Content-Type: application/json;charset=UTF-8
Expires: -1
Server: Microsoft-IIS/8.5
Set-Cookie: .AspNet.Cookies=XXXX; path=/; HttpOnly
X-Powered-By: ASP.NET
X-Powered-By: ARR/2.5
Date: Tue, 31 May 2016 16:55:39 GMT
{"access_token":"XXXX","token_type":"bearer","expires_in":1209599,"userName":"Foo",".issued":"Tue, 31 May 2016 16:55:40 GMT",".expires":"Tue, 14 Jun 2016 16:55:40 GMT"}
Fiddler and Postman saved this cookie and automatically put it in requests to the API (see the “The request from Postman” block above). When I remove the cookie from the Postman GET request, it stops working (just like my Android app).
Now, the question is: why does Web API 2 send a cookie instead of only using the token? And why does the token work fine in the first requests but not in the following ones?
According to the ASP.NET Web API 2 flow you can see at the bottom of that page, it seems your requests are always authenticated but sometimes fail to get authorized.
So IMO the AuthorizationFilter [Authorize] rejects some of your requests for an unknown reason. What I would suggest is to dump the request your API receives, as well as the claims identity attached to the token. Try to see if there are differences between them when you get a successful response and when you get a 401.
That way, you may be able to determine whether it is your request that got malformed, the claims identity that is not good, or the AuthorizationFilter that refuses you for another reason (like too many queries or the like).
Good luck !
UPDATE 1
According to your new input, I think your Web API is configured to use both token and cookie authentication.
What I see here is that you have two solutions:
1°/ Store the returned cookie in your Android application and use it for the next calls. This is the simplest and fastest way to solve your problem without changing your whole API, but you are storing an authorization cookie: it can lead to security problems (CSRF attacks).
2°/ Check how your authentication and authorization filters are set, disable cookie authentication, and rely only on token authentication: this forces all requests to your API to use the token only and protects you from CSRF attacks. It is more complex because you have to dig into your Web API configuration.
Check the following links (sorry, as I don't have enough reputation yet to post more than 2 links per post, you'll find them as text at the end of my answer) :
ASP.NET Secure a Web API 2.2 [2]: from the chapter "Configuring the Authorization Server" at the bottom
MSDN article on Web API security [3]: more general and technical information about Web API security, how to secure it, and CSRF attacks
StackOverflow .NET cookie and token authentication [4]: check David Banister's answer; I think it is exactly what you want: only use the token for all your API calls.
StackOverflow Authorize filter and authentication [5]: more information about such mechanisms for your API
And finally
Cookie authentication with Web API and 401 codes [6]: sounds like your actual problem, doesn't it?
I hope it helps you, good luck !
// Links
2: www.asp.net/web-api/overview/security/individual-accounts-in-web-api
3: msdn.microsoft.com/en-us/magazine/dn201748.aspx
4: stackoverflow.com/questions/22568409/mvc-net-cookie-authenticated-system-acessing-a-web-api-with-token-authenticatio
5: stackoverflow.com/questions/21231751/authorize-filter-and-authentication
6: brockallen.com/2013/10/27/using-cookie-authentication-middleware-with-web-api-and-401-response-codes/
Finally, I got my answer:
My Web Service sends a cookie named “dadaproaffinity” in response to the first request. This cookie was automatically put on the following requests by Postman, but not by Android's HttpUrlConnection. So I just take this cookie and add it to every request along with the token.
But: this cookie is sent by IIS, not by my Web Service! That's why it works locally but not on the production server. I googled this cookie and there are very few answers about it. The only one I found in English is:
Technical Cookie of IIS Server hosting the site.
Need to route to the correct server session, in order to keep it active
Does anyone have more information about this IIS cookie?

URL forbidden 403 when using a tool but fine from browser

I have some images for which I need to do an HttpRequestMethod.HEAD request in order to find out some details of the image.
When I go to the image URL in a browser it loads without a problem.
When I attempt to get the header info via my code or via online tools, it fails.
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.It to try to make the HEAD request, but I am getting the same 403 Forbidden message that I get in my code.
I have tried adding many various headers to the HEAD request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests) but none of this seems to work.
A normal GET request via Hurl.It also fails with the same 403 error.
If it is relevant, my code is a C# web service running on the AWS cloud (just in case the Adorama servers have something against AWS that I don't know about). To test this I also spun up an EC2 instance (Linux box) and ran curl, which also returned the 403 error. Running curl locally on my personal computer returns the binary image data.
And just to remove the obvious thoughts: my code works successfully for many, many other websites; it is just this one that has an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here.
Locally it works smoothly; doing it from an AWS instance I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node). I therefore tried to specify the same IP address that my client resolved, but that didn't fix the problem.
My guess is that Akamai (the service is provided by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN by traffic, so by abusing it people can generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon and it didn't work for me. wget did work from other servers that weren't on Amazon EC2 however. Here is the wget output on EC2
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is the output:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The reason the wget outgoing IP address differs between the two examples is DNS resolution by the CDN provider that Adorama is using.
Web servers may check particular fingerprint attributes to block automated bots. Here are a few things they can check:
GeoIP / IP address
Browser headers
User agents
Plugin info
Browser fonts
You can simulate browser headers and learn about such fingerprinting “attributes” here: https://panopticlick.eff.org
You can try to replicate how a browser behaves and inject similar headers/user-agent; see the libcurl sketch below. Plain curl/wget are unlikely to satisfy those conditions, and even tools like PhantomJS occasionally get blocked. There is a reason some people prefer tools like Selenium WebDriver that drive an actual browser.
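If you are testing from code rather than the command line, here is a rough libcurl sketch of sending browser-like headers. The header values are examples copied from a desktop browser, not a guaranteed way past any particular CDN; CURLOPT_NOBODY turns the request into the HEAD the original question needed.
#include <curl/curl.h>

/* Sketch: issue a HEAD request with browser-like headers. */
int head_like_a_browser(const char *url)
{
    CURL *curl = curl_easy_init();
    struct curl_slist *hdrs = NULL;
    CURLcode rc;

    if (!curl)
        return -1;
    hdrs = curl_slist_append(hdrs,
        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8");
    hdrs = curl_slist_append(hdrs, "Accept-Language: en-US,en;q=0.5");
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);   /* HEAD instead of GET */
    curl_easy_setopt(curl, CURLOPT_USERAGENT,
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:70.0) Gecko/20100101 Firefox/70.0");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    rc = curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    return rc == CURLE_OK ? 0 : -1;
}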
I found that another URL, also protected by AkamaiGHost, was blocking requests due to certain parts of the user agent. In particular, a link including the protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0 okay
facebookexternalhit/1.1 (+http\://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the only thing happening here, as the Mozilla example also has a colon, only not in a link.
I guess most web servers don't care and allow Facebook's bot and other bots that have a contact URL in their user agent. But apparently AkamaiGHost does block it.

How to get Digest authentication right

I am trying to write a C++ application and I have to do HTTP Digest authentication. The problem is not primarily about C++, but about the fact that the connection is not being established. The website I am trying to access is the following: httpbin.org/digest-auth/auth/user/passwd .
Consider the following server response to a simple GET /digest-auth/auth/user/passwd:
HTTP/1.1 401 UNAUTHORIZED
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: text/html; charset=utf-8
Date: Mon, 08 Sep 2014 15:10:09 GMT
Server: gunicorn/18.0
Set-Cookie: fake=fake_value
Www-Authenticate: Digest realm="me@kennethreitz.com", nonce="2a932bfb1f9a748a7b5ee590d0cf99e0", qop=auth, opaque="2d09668631b42bff8375523e7b27e45e"
Content-Length: 0
Connection: keep-alive
A1 is then computed to be user:me@kennethreitz.com:passwd and is hashed to 4de666b60f91e2444f549243bed5fa4b, which I will refer to as HA1.
A2 is computed to be GET:/digest-auth/auth/user/passwd and hashed to b44272ea65ee4af7fb26c5dba58f6863, which I will refer to as HA2.
With this information, the response is computed as MD5(HA1:nonce:nc:cnonce:qop:HA2), here HA1:nonce:1:ac3yyj:auth:HA2, where HA1 and HA2 are the values just computed and the nonce is taken from the server response above; in total: 4de666b60f91e2444f549243bed5fa4b:2a932bfb1f9a748a7b5ee590d0cf99e0:1:ac3yyj:auth:b44272ea65ee4af7fb26c5dba58f6863. The hash of that is 55f292e183ead0810528bb2a13b98e00.
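For anyone who wants to re-check that arithmetic in code, here is a small sketch using OpenSSL's MD5 (link with -lcrypto; the constants are the values from the exchange above):
#include <openssl/md5.h>
#include <stdio.h>
#include <string.h>

/* Hash a string with MD5 and write the lowercase hex digest. */
static void md5_hex(const char *in, char out[33])
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5((const unsigned char *)in, strlen(in), digest);
    for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
        sprintf(out + 2 * i, "%02x", digest[i]);
}

int main(void)
{
    char ha1[33], ha2[33], response[33], buf[256];

    md5_hex("user:me@kennethreitz.com:passwd", ha1);    /* A1 */
    md5_hex("GET:/digest-auth/auth/user/passwd", ha2);  /* A2 */
    /* response = MD5(HA1:nonce:nc:cnonce:qop:HA2) */
    snprintf(buf, sizeof buf, "%s:%s:%s:%s:%s:%s", ha1,
             "2a932bfb1f9a748a7b5ee590d0cf99e0",  /* nonce  */
             "1",                                 /* nc     */
             "ac3yyj",                            /* cnonce */
             "auth",                              /* qop    */
             ha2);
    md5_hex(buf, response);
    printf("HA1=%s\nHA2=%s\nresponse=%s\n", ha1, ha2, response);
    return 0;
}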
Combining all that information should be sufficient to establish an HTTP connection using digest authentication. However, the following request is declined by the server and answered with another HTTP/1.1 401.
GET /digest-auth/auth/user/passwd HTTP/1.1
Host: httpbin.org
Authorization: Digest username="user", realm="me@kennethreitz.com",nonce="2a932bfb1f9a748a7b5ee590d0cf99e0",uri="/digest-auth/auth/user/passwd",qop=auth,nc=1,cnonce="ac3yyj",response="55f292e183ead0810528bb2a13b98e00",opaque="2d09668631b42bff8375523e7b27e45e"
Note that the formatting does not show the structure of the request. The block from Authorization to opaque is actually one line.
Feel free to redo the MD5 calculations; I have redone them manually and got the same hashes as my program did. I used this tool (http://md5-hash-online.waraxe.us/) for the manual computations.
Am I missing something obvious here, probably misinterpreting the standard in a way? Why can't I get authorized?
Finally got it. The authentication itself is completely correct.
The server demands that a cookie be set. Apparently one must present that cookie in the request, too. That explains why Firefox (being a browser) can authenticate correctly while curl and lwp-request can't despite sticking to the standard; the RFC says nothing about cookies. Why do we have standards that no one cares about?
Anyway, appending Cookie: fake=fake_value to the header solves the problem.
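In code terms the fix is one extra header line in the request block; a sketch as a C string (the Authorization value is abbreviated here, it is the same header as in the question):
/* The request from the question, plus the cookie the server set. */
const char *request =
    "GET /digest-auth/auth/user/passwd HTTP/1.1\r\n"
    "Host: httpbin.org\r\n"
    "Cookie: fake=fake_value\r\n"
    "Authorization: Digest username=\"user\", ...\r\n"  /* as in the question */
    "\r\n";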

set-cookie header not working

I'm developing a small site w/ Go and I'm trying to set a cookie from my server.
I'm running the server on localhost, with 127.0.0.1 aliased to subdomain-dev.domain.com on port 5080.
When I receive the response to my POST to subdomain-dev.domain.com:5080/login I can see the Set-Cookie header. The response looks like this:
HTTP/1.1 307 Temporary Redirect
Location: /
Set-Cookie: myappcookie=encryptedvalue==; Path=/; Expires=Fri, 13 Sep 2013 21:12:12 UTC; Max-Age=900; HttpOnly; Secure
Content-Type: text/plain; charset=utf-8
Content-Length: 0
Date: Fri, 13 Sep 2013 20:57:12 GMT
Why isn't Chrome or Firefox recording this? In Chrome it doesn't show up in the Resources tab. In FF I can't see it either, and in neither browser do I see it in subsequent request headers.
See that Secure string in the cookie?
Yeah, me too. But only after a few hours.
Make sure you're accessing your site by SSL (https:// at the beginning of the URL) if you've got the Secure flag set.
If you're developing locally and don't have a cert, make sure you skip that option.
In my case, I had to add this to my response:
access-control-expose-headers: Set-Cookie
I found here that my Set-Cookie header was not accessible to my client unless I added it to Access-Control-Expose-Headers.
Hope this can help someone!
Found a related GitHub issue, “response cookies not being sent”, that helped.
In my case I am running a React app under HTTPS (with the mkcert tool) and making a cross-origin fetch request, and the response cookies were not set until I specified credentials: 'include' for the fetch request.
Example fetch API call:
fetch('https://example.com', {
  credentials: 'include'
});
Specify these response headers from the server:
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://localhost:3000
The Access-Control-Allow-Origin header has the URL of my React app as its value.
Add these attributes to the Set-Cookie header: Path=/; HttpOnly; Secure; SameSite=None (see “Using HTTP cookies”).
Hope it helps someone!

HTTP sending response to OPTIONS request [C]

I am getting a “Response is null” error while receiving the HTTP response.
I am developing a small sample HTTP server in C using raw sockets.
There are actually two servers in my application: one is a standard Apache server which I am using to serve HTML pages, and my small server responds only to XMLHttpRequests sent from the JavaScript within those HTML pages.
I am sending the request from JavaScript as follows:
var sendReq = new XMLHttpRequest();
sendReq.open("POST", "http://localhost:10000/", true);
sendReq.setRequestHeader('Content-Type','application/x-www-form-urlencoded');
sendReq.onreadystatechange = handleResult;
var param = "REQUEST_TYPE=2002&userName=" + userName.value;
param += "&password=" + password.value;
sendReq.send(param);
When I send this request I receive the following request in my server code:
OPTIONS / HTTP/1.1
Host: localhost:10000
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.3) Gecko/20100423 Ubuntu/10.04 (lucid) Firefox/3.6.3
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 115
Connection: keep-alive
Origin: http://localhost:7777
Access-Control-Request-Method: POST
I have replied to this request as follows, using the socket write function:
HTTP/1.1 200 OK\n
Access-Control-Allow-Origin: *\n
Server: PSL/1.0 (Unix) (Ubuntu/Linux)\n
Access-Control-Allow-Methods: POST, GET, OPTIONS\n
Accept-Ranges: bytes\n
Content-Length: 438\nConnection: close\n
Content-Type: text/html; charset=UTF-8\n\n
I don't know what the actual HTTP response to an OPTIONS request should be.
After this I get the actual POST request that I sent from JavaScript, and then I respond back with
HTTP/1.1 200 OK\n\n
Then at the browser end I get the error “Response is null”.
So how do I send headers/data as an HTTP response using raw sockets in C, and how do I respond to an OPTIONS request? Can someone explain by giving some example?
It's hard to understand your question, but I believe you are pointing to this as the response giving you trouble:
HTTP/1.1 200 OK\n\n
You should be including other fields, especially Content-Length and Content-Type. If you're going to build your own HTTP server, you should review the protocol specifications.
That said, it's not at all clear why you need to replace the HTTP server instead of using either CGI or a server-side language (PHP, Java, etc.). Rolling your own significantly reduces portability and maintainability.
Finally, you appear to be transmitting the password in the request. Make sure that this is only done over some kind of encrypted (HTTPS) or else physically secured connection.
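For reference, here is a minimal sketch of a preflight (OPTIONS) response written over a raw socket. Note that HTTP requires CRLF ("\r\n") line endings, not the bare "\n" shown above, and the header block must end with an empty line:
#include <string.h>
#include <unistd.h>

/* Sketch: answer a CORS preflight with an empty body.  The
   Access-Control-* values mirror the ones the asker already sends;
   adjust them to your needs. */
static void send_options_response(int client_fd)
{
    const char *resp =
        "HTTP/1.1 200 OK\r\n"
        "Access-Control-Allow-Origin: *\r\n"
        "Access-Control-Allow-Methods: POST, GET, OPTIONS\r\n"
        "Access-Control-Allow-Headers: Content-Type\r\n"
        "Content-Length: 0\r\n"
        "Connection: close\r\n"
        "\r\n";                      /* empty line ends the headers */
    write(client_fd, resp, strlen(resp));
}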
I'm not sure what you're asking, but you might find the following useful:
HTTP Made Really Easy
HTTP/1.1 rfc2616.txt
MAMA - Opera Developer Community
I found them all quite useful when I was writing an HTTP client.
This problem occurred because, after our server processed the OPTIONS request, any subsequent requests, for some reason, also had to be answered with "Access-Control-Allow-Origin: *" along with the other normal headers and the response body.
After adding this header to our responses, I always got the desired responseText/responseXML in my JavaScript.