How to get Digest authentication right - C++

I am writing a C++ application that has to perform HTTP Digest Authentication. The problem is not primarily about C++, but about the fact that the authentication never succeeds. The site I am trying to access is httpbin.org/digest-auth/auth/user/passwd.
Consider the following server response to a simple GET /digest-auth/auth/user/passwd:
HTTP/1.1 401 UNAUTHORIZED
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Content-Type: text/html; charset=utf-8
Date: Mon, 08 Sep 2014 15:10:09 GMT
Server: gunicorn/18.0
Set-Cookie: fake=fake_value
Www-Authenticate: Digest realm="me#kennethreitz.com", nonce="2a932bfb1f9a748a7b5ee590d0cf99e0", qop=auth, opaque="2d09668631b42bff8375523e7b27e45e"
Content-Length: 0
Connection: keep-alive
A1 is then computed to be user:me#kennethreitz.com:passwd and is hashed to 4de666b60f91e2444f549243bed5fa4b, which I will refer to as HA1.
A2 is computed to be GET:/digest-auth/auth/user/passwd and hashed to b44272ea65ee4af7fb26c5dba58f6863, which I will refer to as HA2.
With this information, the response is computed as MD5(HA1:nonce:nc:cnonce:qop:HA2), where HA1 and HA2 are the values we have just computed, nonce is taken from the server response above, nc is 1, cnonce is ac3yyj and qop is auth. In total that is 4de666b60f91e2444f549243bed5fa4b:2a932bfb1f9a748a7b5ee590d0cf99e0:1:ac3yyj:auth:b44272ea65ee4af7fb26c5dba58f6863, whose hash is 55f292e183ead0810528bb2a13b98e00.
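For reference, the hashes above can be reproduced with a minimal C++ sketch like this (it assumes OpenSSL is available for MD5; MD5() is deprecated in OpenSSL 3.0 but fine for illustration):

#include <openssl/md5.h>
#include <cstdio>
#include <string>

// Hex-encode the MD5 digest of a string (lower-case, as required by RFC 2617).
static std::string md5_hex(const std::string& s) {
    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5(reinterpret_cast<const unsigned char*>(s.data()), s.size(), digest);
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    for (int i = 0; i < MD5_DIGEST_LENGTH; ++i)
        std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
    return std::string(hex, 2 * MD5_DIGEST_LENGTH);
}

int main() {
    // Values taken from the challenge above.
    std::string ha1 = md5_hex("user:me#kennethreitz.com:passwd");    // A1 = user:realm:password
    std::string ha2 = md5_hex("GET:/digest-auth/auth/user/passwd");  // A2 = method:uri
    std::string response = md5_hex(ha1 + ":2a932bfb1f9a748a7b5ee590d0cf99e0"
                                   ":1:ac3yyj:auth:" + ha2);         // HA1:nonce:nc:cnonce:qop:HA2
    std::printf("HA1=%s\nHA2=%s\nresponse=%s\n",
                ha1.c_str(), ha2.c_str(), response.c_str());
}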
Combining all that information should be sufficient to establish an HTTP connection using digest authentication. However, the following request is declined by the server and answered with another HTTP/1.1 401.
GET /digest-auth/auth/user/passwd HTTP/1.1
Host: httpbin.org
Authorization: Digest username="user", realm="me#kennethreitz.com",nonce="2a932bfb1f9a748a7b5ee590d0cf99e0",uri="/digest-auth/auth/user/passwd",qop=auth,nc=1,cnonce="ac3yyj",response="55f292e183ead0810528bb2a13b98e00",opaque="2d09668631b42bff8375523e7b27e45e"
Note that the formatting does not show the structure of the request. The block from Authorization to opaque is actually one line.
Feel free to redo the MD5 calculation - but I have redone the calculations manually and got the same hashes as my program did. I used this tool (http://md5-hash-online.waraxe.us/) for the manual computations.
Am I missing something obvious here, probably misinterpreting the standard in a way? Why can't I get authorized?

Finally got it. The authentication itself is completely correct.
The server demands a cookie to be set. Apparently one must present that cookie in the request, too. That explains why Firefox (being a browser) can authenticate correctly while curl and lwp-request can't, despite sticking to the standard - the RFC says nothing about cookies. Why do we have standards that no one cares about?
Anyway, appending Cookie: fake=fake_value to the request headers solves the problem.
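For completeness, if libcurl is an option, letting it do the digest handshake and enabling its cookie engine (so the fake cookie is replayed) looks roughly like this (a minimal sketch, error handling omitted):

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://httpbin.org/digest-auth/auth/user/passwd");
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, CURLAUTH_DIGEST);  // let libcurl do the digest handshake
    curl_easy_setopt(curl, CURLOPT_USERPWD, "user:passwd");
    curl_easy_setopt(curl, CURLOPT_COOKIEFILE, "");             // enable the cookie engine so Set-Cookie is replayed
    curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);
    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}

With CURLOPT_VERBOSE enabled you can watch both the 401 challenge and the second, authorized request, including the Cookie header.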

Related

JMeter: "This site does not specify a policy in the P3P header" error

I am trying to hit the URL https://subdomain.example.com in JMeter. The test plan, recorded using the BlazeMeter Chrome extension, has all the necessary config elements, but I get an error:
HTTP/1.1 429 Too Many Requests
Content-Type: text/html; charset=utf-8
Content-Length: 1031
Connection: keep-alive
Cache-Control: private, no-cache, no-store, must-revalidate
Date: Tue, 20 Aug 2019 01:21:35 GMT
Expires: 0
p3p: CP="This site does not specify a policy in the P3P header"
I have tried copying the cookies from the browser's response headers, which works for some time, but then it starts throwing an error.
As per the HTTP status code 429 Too Many Requests description:
The HTTP 429 Too Many Requests response status code indicates the user has sent too many requests in a given amount of time ("rate limiting").
A Retry-After header might be included to this response indicating how long to wait before making a new request.
So there are the following options:
Your server is overloaded; in this case there is nothing you can do apart from reporting the error as the bottleneck.
Your script doesn't have proper correlation implemented, i.e. you're sending recorded hard-coded values instead of getting dynamic parameters.
Your server doesn't allow that many requests from a single IP address within the given timeframe; you could try implementing IP spoofing so your server would "think" that the requests are coming from different machines.
Thanks for your reply. In the end I figured out that no limitation on the number of calls is implemented.
Now, coming to the answer, this is how I managed to make it work:
I opened the page in Chrome and, from the header section, copied all the header elements into the Header Manager, hard-coded.
The first time it fails and returns p3p: CP="This site does not specify a policy in the P3P header", but it also returns the updated variable value needed for the next request, which I extract and use in the next and subsequent requests. I was able to find out which variable changes by doing a string comparison of two response headers.
This was a difficult one, but it somehow worked with a very minor change. To be on the safe side, I also added the Header Manager to each request.

MS Edge dropping cookie

We're experiencing a strange behaviour on MS Edge with at least versions 16/17. The same issue does not happen on IE 11/Chrome/Firefox.
Our users are authenticated via a session cookie. The cookie name is "app". Other cookies include one storing the current display language, with the name "prefLang". The cookies are HttpOnly, secure and set only for the actual subdomain in use.
In some cases, MS Edge simply drops the session cookie named "app", and it is no longer part of the HTTP request. The mentioned "prefLang" cookie is not dropped and is visible on the server.
The dropping is not easily reproducible, but has sometimes been visible when:
the user opens an externally linked page from the logged-in app context with target=_blank
more than 12 minutes have passed between page requests within the app context
immediately, within seconds of page requests
the user opens an iframe with a page request from the same origin
Example with request/response debug information on the server side, where the session cookie app=redactedABC is not transmitted to the server:
2018-11-28 09:44:00 UTC POST hasIdentity: 1 UserId: <redacted> Request: https://app.domain.com/page/action/full/add/0 / Cookie: _gid=redacted; _ga=redacted; app=redacted-ABC; prefLang=de
2018-11-28 09:44:00 UTC 200 hasIdentity: 0 UserId: 0 Response headers: Array
(
[0] => Expires: Thu, 19 Nov 1981 08:52:00 GMT
[1] => Cache-Control: no-store, no-cache, must-revalidate
[2] => Pragma: no-cache
)
2018-11-28 09:46:21 UTC POST hasIdentity: 0 UserId: 0 Request: https://app.domain.com/page/action/full/add/0 / Cookie: _gid=redacted; _ga=redacted; prefLang=de
2018-11-28 09:46:21 UTC 302 hasIdentity: 0 UserId: 0 Response headers: Array
(
[0] => Expires: Thu, 19 Nov 1981 08:52:00 GMT
[1] => Cache-Control: no-store, no-cache, must-revalidate
[2] => Pragma: no-cache
[3] => Set-Cookie: app=redactedXYZ; path=/; domain=app.domain.com; secure; HttpOnly
)
2018-11-28 09:46:21 UTC GET hasIdentity: 0 UserId: 0 Request: https://app.domain.com/account/login / Cookie: _gid=redacted; _ga=redacted; prefLang=de; app=redactedXYZ
2018-11-28 09:46:21 UTC 200 hasIdentity: 0 UserId: 0 Response headers: Array
(
[0] => Expires: Thu, 19 Nov 1981 08:52:00 GMT
[1] => Cache-Control: no-store, no-cache, must-revalidate
[2] => Pragma: no-cache
)
I have so many questions and thoughts that it would be too long for a comment:
When you say it's OK in other browsers, is that based on some test cases, or is Edge only one of the many browsers usually employed?
Have you managed to check whether the cookie is still registered client side before/after the faulty request? Have you checked the request headers client side? The question is whether Edge forgets the cookie itself, forgets to send the cookie, or sends a bad cookie. Have you also tried tweaking the cookie name and content a bit (removing the domain, for instance, if feasible in your subdomain context)?
What about the server side? Load balancers can be an explanation. Session storage could also be a clue (quite a low chance though, except if the "no returned session cookie" request is the consequence of a previous unlogged invalid cookie kicking in). Of course, server-side investigation makes no sense if other browsers are running flawlessly in significant numbers.
How are you running the client-side app and performing requests? Ajax or fetch requests alongside full document loading (your URIs look very API-ish)? Have you noticed a link between request mode and the issue?
Unavailability of the cookie data within client-side parts of the app can also be a guideline (a service worker that can't access cookies waking up on a request under rarely met conditions, for instance). Edge can also be faulty with cookies sent back with Ajax in local files (an awful way to build an app, but I've seen so many weird things).
With the information you've provided, very few of these points seem able to produce such inconsistent behavior, except if mixed into some bloody Edge-sensitive potion. Anyway, the answers may help to focus the issue and define a more reproducible context.
Aside from this, I've found a two-year-old thread about a very, very, very similar issue, still active and apparently unsolved, for... IE11 (sorry). It's related to session cookie drops when the app is accessed from different browser processes (like tabs or iframes). I've found nothing about this issue for Edge, and I believe most of the engine has been rewritten, but perhaps you managed to find the haunted section (though you're saying that all is fine on IE11)?
If you agree, you'd better edit your own question with the relevant points so I can delete this answer, which is not a real answer.

URL forbidden 403 when using a tool but fine from browser

I have some images for which I need to do an HttpRequestMethod.HEAD request in order to find out some details of the image.
When I go to the image URL in a browser, it loads without a problem.
When I attempt to get the header info via my code or via online tools, it fails.
An example URL is http://www.adorama.com/images/large/CHHB74P.JPG
As mentioned, I have used the online tool Hurl.it to try to perform the HEAD request, but I am getting the same 403 Forbidden message that I am getting in my code.
I have tried adding various headers to the HEAD request (User-Agent, Accept, Accept-Encoding, Accept-Language, Cache-Control, Connection, Host, Pragma, Upgrade-Insecure-Requests), but none of this seems to work.
It also fails to do a normal GET request via Hurl.it. Same 403 error.
If it is relevant, my code is a C# web service and is running on the AWS cloud (just in case the Adorama servers have something against AWS that I don't know about). To test this I have also spun up an EC2 instance (Linux box) and run curl, which also returned the 403 error. Running curl locally on my personal computer returns the binary image, which is presumably just the image data.
And just to remove the obvious thoughts: my code works successfully for many, many other websites; it is just this one where there is an issue.
Any idea what is required for me to download the image headers and not get the 403?
Same problem here.
Locally it works smoothly. Doing it from an AWS instance, I get the very same problem.
I thought it was a DNS resolution problem (redirecting to a malfunctioning node). I therefore tried to specify the same IP address that my client resolved, but that didn't fix the problem.
My guess is that Akamai (the service is provided by an Akamai CDN in this case) is blocking AWS. It is somewhat understandable: customers pay for CDN by traffic, and by abusing it people can generate huge bills.
Connecting to www.adorama.com (www.adorama.com)|104.86.164.205|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.1 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 301
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 09:34:20 GMT
Connection: close
2016-03-23 09:34:20 ERROR 403: Forbidden.
I tried that URL from Amazon and it didn't work for me. wget did work from other servers that weren't on Amazon EC2, however. Here is the wget output on EC2:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:42:33-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.40.219.79
Connecting to www.adorama.com|23.40.219.79|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 403 Forbidden
Server: AkamaiGHost
Mime-Version: 1.0
Content-Type: text/html
Content-Length: 299
Cache-Control: max-age=604800
Date: Wed, 23 Mar 2016 08:42:33 GMT
Connection: close
2016-03-23 08:42:33 ERROR 403: Forbidden.
But from another Linux host it did work. Here is the output:
wget -S http://www.adorama.com/images/large/CHHB74P.JPG
--2016-03-23 08:43:11-- http://www.adorama.com/images/large/CHHB74P.JPG
Resolving www.adorama.com... 23.45.139.71
Connecting to www.adorama.com|23.45.139.71|:80... connected.
HTTP request sent, awaiting response...
HTTP/1.0 200 OK
Content-Type: image/jpeg
Last-Modified: Wed, 23 Mar 2016 08:41:57 GMT
Server: Microsoft-IIS/8.5
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
ServerID: C01
Content-Length: 15131
Cache-Control: private, max-age=604800
Date: Wed, 23 Mar 2016 08:43:11 GMT
Connection: keep-alive
Set-Cookie: 1YDT=CT; expires=Wed, 20-Apr-2016 08:43:11 GMT; path=/; domain=.adorama.com
P3P: CP="NON DSP ADM DEV PSD OUR IND STP PHY PRE NAV UNI"
Length: 15131 (15K) [image/jpeg]
Saving to: “CHHB74P.JPG”
100%[=====================================>] 15,131 --.-K/s in 0s
2016-03-23 08:43:11 (460 MB/s) - “CHHB74P.JPG” saved [15131/15131]
I would guess that the image provider is deliberately blocking requests from EC2 address ranges.
The reason the IP address that wget connects to is different in the two examples is DNS resolution by the CDN provider that Adorama is using.
Web servers may check particular fingerprint attributes to prevent automated bots. Here are a few of the things they can check:
GeoIP / IP address
Browser headers
User agent
Plugin info
Browser fonts
You can simulate the browser headers and learn about some fingerprinting "attributes" here: https://panopticlick.eff.org
You can try to replicate how a browser behaves and inject similar headers/user-agent. Plain curl/wget are not likely to satisfy those conditions; even tools like PhantomJS occasionally get blocked. There is a reason why some prefer tools like Selenium WebDriver that launch an actual browser.
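As an illustration of injecting browser-like headers, a minimal libcurl sketch might look like the following; the header values are examples copied from a desktop browser and are not guaranteed to get past the CDN:

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    // Example browser-like headers; values are illustrative only.
    struct curl_slist* headers = nullptr;
    headers = curl_slist_append(headers, "Accept: text/html,application/xhtml+xml,image/webp,*/*;q=0.8");
    headers = curl_slist_append(headers, "Accept-Language: en-US,en;q=0.5");

    curl_easy_setopt(curl, CURLOPT_URL, "http://www.adorama.com/images/large/CHHB74P.JPG");
    curl_easy_setopt(curl, CURLOPT_USERAGENT,
                     "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Gecko/20100101 Firefox/70.0");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);           // HEAD request, as in the question
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

    CURLcode rc = curl_easy_perform(curl);
    long status = 0;
    curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);

    curl_slist_free_all(headers);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return (rc == CURLE_OK && status < 400) ? 0 : 1;
}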
I found that another URL, also protected by AkamaiGHost, was blocking due to certain parts of the user agent. In particular, a user agent containing a link with a protocol was blocked:
Using curl -H 'User-Agent: some-user-agent' https://some.website I found the following results for different user agents:
Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0: okay
facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php): 403
https ://bar: okay
https://bar: 403
All I could find for now is this (downvoted) answer https://stackoverflow.com/a/48137940/230422 stating that colons (:) are not allowed in header values. That is clearly not the only thing happening here, as the Mozilla example also contains a colon, only not as part of a link.
I guess that at least most web servers don't care and allow Facebook's bot and other bots that carry a contact URL in their user agent. But apparently AkamaiGHost does block it.
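To reproduce that comparison programmatically rather than with a shell loop, a small libcurl sketch like this (placeholder URL, the same example user agents) can print the status per user agent:

#include <curl/curl.h>
#include <cstdio>

// Discard the response body; we only care about the status code.
static size_t discard(void*, size_t size, size_t nmemb, void*) { return size * nmemb; }

int main() {
    const char* agents[] = {
        "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:70.0) Gecko/20100101 Firefox/70.0",
        "facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)",
        "https ://bar",
        "https://bar",
    };

    curl_global_init(CURL_GLOBAL_DEFAULT);
    for (const char* ua : agents) {
        CURL* curl = curl_easy_init();
        curl_easy_setopt(curl, CURLOPT_URL, "https://some.website/");  // placeholder URL
        curl_easy_setopt(curl, CURLOPT_USERAGENT, ua);
        curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
        curl_easy_perform(curl);
        long status = 0;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &status);
        std::printf("%-75s -> %ld\n", ua, status);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
}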

Azure list blob operation is failing with error 403 "AuthenticationError"

We are working on a product which uses the Azure storage service for storing data.
We are using the Azure REST API from C++ to communicate with Azure, and cURL to execute the REST requests.
Right now we are working on the functionality to list blobs, but it is failing with this error:
<?xml version="1.0" encoding="utf-8"?>
<Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate
the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:16cd7e3d-0001-0032-2dd6-6f2e4f000000
Time:2016-02-25T14:14:23.2377982Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'CyPhz
sBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=' is not the same as any computed signature. Server used following string to sign
: 'GET
x-ms-date:Thu, 25 Feb 2016 14:16:20 GMT
x-ms-version:2015-02-21
/sevenstars/container2
comp:list
delimiter:/
maxresults:2
restype:container'
</AuthenticationErrorDetail></Error>
======================
The following is the Wireshark output that we observed:
GET /container2?comp=list&delimiter=/&maxresults=2&restype=container HTTP/1.1
Host: sevenstars.blob.core.windows.net
Accept: */*
x-ms-date:Thu, 25 Feb 2016 14:16:20 GMT
x-ms-version:2015-02-21
Authorization:SharedKey sevenstars:CyPhzsBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=
HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
Content-Length: 704
Content-Type: application/xml
Server: Microsoft-HTTPAPI/2.0
x-ms-request-id: 16cd7e3d-0001-0032-2dd6-6f2e4f000000
Date: Thu, 25 Feb 2016 14:14:22 GMT
...<?xml version="1.0" encoding="utf-8"?><Error><Code>AuthenticationFailed</Code><Message>Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
RequestId:16cd7e3d-0001-0032-2dd6-6f2e4f000000
Time:2016-02-25T14:14:23.2377982Z</Message><AuthenticationErrorDetail>The MAC signature found in the HTTP request 'CyPhzsBdBCRRg2w157IYY4sIB23XwzKsfdAaUTVCAts=' is not the same as any computed signature. Server used following string to sign: 'GET
======================
As per suggestions on the Microsoft forum, I ensured that all parameters are set correctly (":" is used instead of "=" in the string to sign).
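For reference, this is roughly how we build the SharedKey signature from the string to sign (a minimal OpenSSL sketch; the account key and string to sign are placeholders, and the exact string must match what the service documents, including empty lines for unused standard headers):

#include <openssl/hmac.h>
#include <openssl/evp.h>
#include <cstdio>
#include <string>
#include <vector>

// Base64-decode the account key. EVP_DecodeBlock rounds up to a multiple of 3,
// so strip one byte per '=' padding character at the end of the input.
static std::vector<unsigned char> b64_decode(const std::string& in) {
    std::vector<unsigned char> out(3 * in.size() / 4 + 1);
    int len = EVP_DecodeBlock(out.data(),
                              reinterpret_cast<const unsigned char*>(in.data()),
                              static_cast<int>(in.size()));
    for (size_t i = in.size(); i > 0 && in[i - 1] == '='; --i) --len;
    out.resize(len > 0 ? len : 0);
    return out;
}

static std::string b64_encode(const unsigned char* data, int len) {
    std::vector<unsigned char> out(4 * ((len + 2) / 3) + 1);
    int n = EVP_EncodeBlock(out.data(), data, len);
    return std::string(reinterpret_cast<const char*>(out.data()), n);
}

int main() {
    // Placeholders - substitute the real account key and the exact string to sign.
    std::string accountKey   = "<base64-encoded-account-key>";
    std::string stringToSign = "<string to sign, exactly as documented>";

    std::vector<unsigned char> key = b64_decode(accountKey);
    unsigned char mac[EVP_MAX_MD_SIZE];
    unsigned int macLen = 0;
    HMAC(EVP_sha256(), key.data(), static_cast<int>(key.size()),
         reinterpret_cast<const unsigned char*>(stringToSign.data()),
         stringToSign.size(), mac, &macLen);

    std::printf("Authorization: SharedKey sevenstars:%s\n",
                b64_encode(mac, static_cast<int>(macLen)).c_str());
}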
Can you please let us know how we can resolve this issue?
Your help is much appreciated.
Thanks and regards
Rahul Naik

Use libCurl with Bluecoat cookie proxy

I am trying to connect through a Bluecoat proxy which uses a cookie during the proxy authentication.
I have been completely unable to find a combination of CURLOPT_ settings that will get CURL to present the cookie during proxy authentication.
So: the proxy responds with:
HTTP/1.1 407 Proxy Authentication Required
Proxy-Authenticate: NTLM
Cache-Control: no-cache
Pragma: no-cache
Content-Type: text/html; charset=utf-8
Proxy-Connection: close
Set-Cookie: BCSI-CS-EDD688431754D715=2; Path=/
Connection: close
Content-Length: 825
But curl does not present the cookie in subsequent authentication attempts, no matter what I set for CURLOPT_COOKIEFILE or CURLOPT_COOKIEJAR.
NOTE: I am also using (because I must)
CURLOPT_PROXYTYPE = CURLPROXY_HTTP
CURLOPT_PROXYAUTH = CURLAUTH_ANY
CURLOPT_HTTPPROXYTUNNEL = 1
CURLOPT_CONNECT_ONLY = 1
Is it reasonable to expect CURL to present a cookie with a Proxy-Authorization request?
I am using curl_easy_*, would moving to the multi interface help?
Finally, I am building with libcurl 7.19.7.
The CONNECT request is done a bit separately in the code from the "regular" requests, and it seems there's no cookie handling done there! I consider it a libcurl bug.
(This is my comment from above, turned into a proper answer.)
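A possible workaround on newer libcurl releases (7.37.0 or later, so not the 7.19.7 used in the question) is to pass the cookie explicitly as a proxy-only header via CURLOPT_PROXYHEADER. A rough sketch with placeholder proxy and credentials:

#include <curl/curl.h>

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();

    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
    curl_easy_setopt(curl, CURLOPT_PROXY, "http://proxy.example.com:8080");  // placeholder proxy
    curl_easy_setopt(curl, CURLOPT_PROXYAUTH, CURLAUTH_ANY);
    curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "DOMAIN\\user:password");   // placeholder credentials
    curl_easy_setopt(curl, CURLOPT_HTTPPROXYTUNNEL, 1L);

    // Send the Blue Coat cookie only to the proxy (CONNECT request), not to the origin server.
    struct curl_slist* proxyHeaders = nullptr;
    proxyHeaders = curl_slist_append(proxyHeaders, "Cookie: BCSI-CS-EDD688431754D715=2");
    curl_easy_setopt(curl, CURLOPT_HEADEROPT, CURLHEADER_SEPARATE);
    curl_easy_setopt(curl, CURLOPT_PROXYHEADER, proxyHeaders);

    CURLcode rc = curl_easy_perform(curl);

    curl_slist_free_all(proxyHeaders);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}

Note that CURLOPT_PROXYHEADER only takes effect when CURLOPT_HEADEROPT is set to CURLHEADER_SEPARATE.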
It is possible to create a tunnel through a Blue Coat proxy. But my advice is not to use a network with a Blue Coat proxy at all. In a free country it should not be a problem to buy a SIM card and use a mobile network instead.
Read more at https://bluecoatproxy.wordpress.com