I'm using PolarProxy for decrypting HTTPS requests.
When I look at an intercepted response from reCAPTCHA, I see a rather strange payload.
It is the HTTP/2 protocol.
The cookies are the same, which means we are looking at the same request, but the response in Wireshark is completely different from the response in the browser. Why?
Does anyone know how to intercept that response from Recaptcha?
I want to do it in C++ (libpcap) in the future, but first I want to do it manually.
I am trying to make use of a function URL in the case of a mono Lambda function; I have created a function URL with no security.
The URL was created successfully, but I was not able to hit that URL using Postman, so I used the Chrome web browser to hit my URL (a GET request). The problem is that whenever I hit the URL, my function gets executed twice.
If anyone has faced the same issue, please assist.
There are two possibilities I can think of:
Chrome/the browser sending another request for favicon.ico
If you have configuration on the server side that enforces HTTP-to-HTTPS conversion of the request, such as a redirect to enforce an SSL connection. In that case as well, the browser sends one HTTP request and then the redirected HTTPS request. E.g., when you hit http://example.org and HTTPS is enforced, the browser sends another request to https://example.org.
You need to check both possibilities with some network troubleshooting; a sketch for the redirect case follows below. Hope this helps!
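If you want to test the redirect possibility directly, here is a minimal sketch in TypeScript (Node 18+ fetch; the URL is a placeholder for your function URL):

// Probe the URL without following redirects. If the server enforces
// HTTPS, the first response is a 301/302/308 pointing at the https://
// address, which is exactly the second request the browser then sends.
const res = await fetch("http://example.org/", { redirect: "manual" });
console.log(res.status); // a 3xx status here confirms the redirect theory
console.log(res.headers.get("location")); // where the browser is sent next

If instead the second hit is the favicon, the server-side logs (CloudWatch, in the case of a Lambda function URL) will show an extra request for /favicon.ico alongside your own.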
I've run into a few problems with setting cookies, and based on the reading I've done, this should work, so I'm probably missing something important.
The situation:
Previously I received responses from my API and used JavaScript to save them as cookies, but then I found that using the set-cookie response header is more secure in a lot of situations.
I have two cookies: "nuser" (contains a username) and "key" (contains a session key). nuser shouldn't be httpOnly so that JavaScript can access it; key should be httpOnly to prevent rogue scripts from stealing a user's session. Also, any request from the client to my API should contain the cookies.
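For reference, a minimal sketch of how the server could set those two cookies, written here with Express in TypeScript (the cookie values are made up):

import express from "express";

const app = express();

app.post("/login/login", (req, res) => {
  // nuser stays readable from document.cookie, so httpOnly is false
  res.cookie("nuser", "some-username", { httpOnly: false });
  // key must be invisible to scripts, so httpOnly is true
  res.cookie("key", "some-session-key", { httpOnly: true });
  res.sendStatus(200);
});

app.listen(8080);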
The log-in request
Here's my current implementation: I make a request to my login API at localhost:8080/login/login (keep in mind that the web client is hosted on localhost:80, but based on what I've read, port numbers shouldn't matter for cookies).
First the web browser makes an OPTIONS request to confirm that all the headers are allowed. I've made sure that the server response includes access-control-allow-credentials to alert the browser that it's okay to store cookies.
Once the OPTIONS response comes back, the browser makes the actual POST request to the login API. The server sends back the set-cookie header and everything looks good at this point.
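For completeness, a rough sketch of the CORS middleware such a server would need (Express in TypeScript; the origin comes from my setup, the rest is an assumption about a typical configuration):

import express from "express";

const app = express();

app.use((req, res, next) => {
  // With credentials, the allowed origin must be exact, never "*".
  // Port 80 is implied, so the origin is just "http://localhost".
  res.set("Access-Control-Allow-Origin", "http://localhost");
  res.set("Access-Control-Allow-Credentials", "true");
  res.set("Access-Control-Allow-Headers", "Content-Type");
  if (req.method === "OPTIONS") {
    res.sendStatus(204); // answer the preflight with no body
    return;
  }
  next();
});

app.listen(8080);

The client has to opt in as well, e.g. fetch(url, { credentials: "include" }) or xhr.withCredentials = true; without that, the browser neither stores nor sends cookies on cross-origin requests.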
The Problems
This setup yields two problems. Firstly, though the nuser cookie is not httpOnly, I don't seem to be able to access it via JavaScript. I'm able to see nuser in my browser's cookie menu, but document.cookie yields "".
Secondly, the browser seems to place the Cookie request header only in requests to the exact same API (the login API).
But if I make a request to a different API that's still on my localhost server, the cookie header isn't present.
(This request returns a 406 just because my server is currently configured to do that if the user isn't validated. I know it should probably be a 403, but the thing to focus on here is that the cookie header isn't included among the request headers.)
So, I've explained my implementation based on my current understanding of cookies, but I'm obviously missing something. Posting exactly what the request and response headers should look like for each task would be greatly appreciated. Thanks.
Okay, I still haven't found exactly what was causing the problem in this specific case, but I updated my localhost:80 server to accept API requests and then make a subsequent request to localhost:8080 to get the proper information. Because the set-cookie header is now being set by localhost:80 (the client's origin), everything works fine. From my reading before, I thought that ports didn't matter, but apparently they do.
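A minimal sketch of that workaround, assuming an Express server on port 80 and Node 18+ for fetch (the route name is illustrative):

import express from "express";

const app = express();

// Same-origin proxy: the browser talks only to localhost:80, which
// forwards the call to the API on :8080 and relays Set-Cookie back.
// (Request body and headers are omitted for brevity.)
app.post("/login", async (req, res) => {
  const upstream = await fetch("http://localhost:8080/login/login", {
    method: "POST",
  });
  const cookies = upstream.headers.getSetCookie(); // Node 18.14+
  if (cookies.length > 0) res.set("Set-Cookie", cookies);
  res.status(upstream.status).send(await upstream.text());
});

app.listen(80);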
We are trying to build a REST interface that allows users to test the existence of a specific resource. Let's assume we're selling domain names: the user needs to determine if the domain is available.
An HTTP GET combined with 200 and 404 response codes seems sensible at first glance.
The problem we have is discriminating between a request successfully served by our lookup service, and a request served under exceptional behaviour from other components. For example:
404 and 200 can be returned by intermediary proxies that actually block the request. This can be due to proxy misconfiguration, or even external infrastructure such as coffee-shop Wi-Fi using poor forms-based authentication.
Clients could be using broken URLs. This could occur through deprecation or (again) by misconfiguration. We could combat the former through 301, however.
What is the current best practice for discriminating between responses that have been successfully fulfilled against the client's intention for that request, and responses served through exceptional behaviour?
The problem is eliminated by tunnelling responses through the response body, as we can ensure these are unique to our service. However, that doesn't seem very RESTful!
Simply have your application add some content to its HTTP responses that will distinguish them from the responses thrown by intermediaries. Any or all of these would work:
Information about the error in the response content that is recognizable as your application's content (for example, Application error: Domain name not found (404))
A Content-Type header in the response that indicates that the response content should be decoded as an application error (for example, Content-Type: application/vnd.domain-finder.error+json)
A custom header in the response that indicates it is an application error
Once you implement a scheme like this, your API clients will need to be aware of the mechanism you choose if they want to react differently to application errors versus infrastructure errors, so just document it clearly.
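As a rough sketch combining all three markers (Express in TypeScript; the vendor media type comes from above, while the header name and the lookup are made up):

import express from "express";

const app = express();

// Hypothetical availability check; stands in for the real lookup service.
const isRegistered = (name: string) => name === "exists.com";

app.get("/api/v1/domains/:name", (req, res) => {
  if (!isRegistered(req.params.name)) {
    res
      .status(404)
      // the vendor media type marks this as our 404, not a proxy's
      .set("Content-Type", "application/vnd.domain-finder.error+json")
      .set("X-App-Error", "domain-not-found") // custom marker header
      .send(JSON.stringify({ error: "Application error: Domain name not found (404)" }));
    return;
  }
  res.json({ name: req.params.name, registered: true });
});

app.listen(3000);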
I tend to follow the "do what's RESTful as long as it makes sense" line of thinking.
Let's say you have an API that looks like this:
/api/v1/domains/<name>/
Hitting /api/v1/domains/exists.com/ could then return a 200 with some whois information.
Hitting /api/v1/domains/doesnt.com/ could return a 404 with links to purchase options.
That would probably work. If the returned content follows a strict format (e.g. a JSON response with a results key) then your API's responses can be differentiated from your proxies' responses.
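In code, that could look something like the following sketch (Express in TypeScript; the whois data and purchase links are placeholders):

import express from "express";

const app = express();

// Stand-in for the real registry lookup.
const whois = (name: string) =>
  name === "exists.com" ? { registrant: "Example Corp" } : null;

app.get("/api/v1/domains/:name", (req, res) => {
  const record = whois(req.params.name);
  if (record) {
    // 200: the domain resource exists, so return whois-style info
    res.json({ results: { domain: req.params.name, whois: record } });
  } else {
    // 404: not registered; the strict "results" shape still marks
    // the response as ours rather than a proxy's
    res.status(404).json({
      results: null,
      links: [{ rel: "purchase", href: "/purchase?domain=" + req.params.name }],
    });
  }
});

app.listen(3000);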
Alternatively, you could offer
/api/v1/domains/?search=maybe
/api/v1/domains/?lookup=maybe.com
This is now slightly less RESTful but it's still self-describing and (in my opinion) not that bad. Now every response can be a 200 and your content can reveal the results.
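A sketch of that always-200 variant (same Express setup; the response shape is invented):

import express from "express";

const app = express();

// Every response is a 200; the body's "results" key carries the
// actual answer, so a proxy-generated 200 can't be confused with ours.
app.get("/api/v1/domains", (req, res) => {
  const name = String(req.query.lookup ?? "");
  res.json({
    query: name,
    results: [{ domain: name, available: name !== "exists.com" }],
  });
});

app.listen(3000);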
I swear I saw this once:
A website that just echoes back the request info (headers, url, method, params, etc) of all requests that come in.
Sort of like the opposite of hurl.it
Found it: httpbin is a web service for testing HTTP clients.
The response just tells you the request it received.
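For example (Node 18+ fetch in TypeScript):

// httpbin echoes the request back: query params, headers, origin, URL.
const res = await fetch("https://httpbin.org/get?demo=1", {
  headers: { "X-Demo": "hello" },
});
console.log(await res.json());
// => { args: { demo: "1" }, headers: { "X-Demo": "hello", ... },
//      origin: "<your IP>", url: "https://httpbin.org/get?demo=1" }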
You could also try Fiddler if you want a desktop application for doing this!
I made an application using Qt/C++ that reads some values every 5-7 seconds and sends them to a website.
My approach is very simple: I just read the values I want to send and then make an HTTP POST to the website. I also send the username and password to the website.
The problem is that I cannot find out whether the request was successful. If I send the request and the server gets it, I will always get an HTTP 200 back. For example, if the password is not correct, there is no way to know it. That is just the way HTTP works.
Now I think I will need some kind of protocol to handle the communication between the application and the website.
The question is what protocol to use?
If the action performed completes before the response header is sent, you have the option of adding a custom status to it. If your website is built on PHP, you can call header() to add the custom status of the operation.
header('XAppRequest-Status: complete');
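On the client you would then check that header after the POST. A sketch with fetch in TypeScript (the URL and payload are made up; in the Qt app the equivalent is reading the raw header from the QNetworkReply):

// POST the readings, then trust the custom status header set by the
// server-side script instead of the blanket HTTP 200.
const res = await fetch("https://example.org/readings", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ user: "demo", password: "demo", value: 42 }),
});

if (res.headers.get("XAppRequest-Status") === "complete") {
  console.log("server processed the request successfully");
} else {
  console.log("request arrived, but the operation did not complete");
}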
If you can modify the server-side script, you could do the following.
On one end:
Make the HTTP POST request via Ajax and evaluate the result of the Ajax request.
On the server side:
Process the HTTP request and, if everything goes accordingly, send data back to the Ajax script that called it.
Does that solve your problem?
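A small sketch of that pattern (the endpoint and response shape are invented for illustration):

// Client: POST the values, then evaluate the status the server script
// returns in the response body rather than the HTTP status code alone.
const res = await fetch("https://example.org/api/submit", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ user: "demo", password: "demo", value: 42 }),
});
const result = (await res.json()) as { ok: boolean; error?: string };
if (!result.ok) {
  console.error("server rejected the request:", result.error);
}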