I'm writing a program to log into a game and get some information from the account.
After making a POST request with the username and password, I make a GET request to the same location in order to download the needed HTML source.
However, doing qDebug() << QString(reply->readAll());
prints "\u001F?\b" instead of the entire source code of the page.
The GET reply has status code 200, and the error() function returns NetworkError(NoError).
For the POST and GET requests I'm using header information obtained from Chrome's Network tab in the developer tools, combined with cookies obtained from previous response headers.
I'm doing a GET request after the login POST request because that's what seems to happen on the actual webpage, as displayed in the developer tools.
The response might be gzipped: "\u001F?\b" is exactly how the gzip magic number (0x1F 0x8B 0x08) prints when the unprintable 0x8B byte is rendered as "?". Does unzipping it yield the expected result?
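For illustration, here's a minimal sketch of that check in Java (the asker's code is Qt/C++, but the idea is language-agnostic): detect the gzip magic bytes at the front of the payload and inflate them.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GzipCheck {
    // Returns the decompressed bytes if the payload starts with the
    // gzip magic bytes 0x1F 0x8B; otherwise returns it unchanged.
    static byte[] gunzipIfNeeded(byte[] body) throws IOException {
        if (body.length < 2 || (body[0] & 0xFF) != 0x1F || (body[1] & 0xFF) != 0x8B) {
            return body; // not gzip data
        }
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(body));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        }
    }
}

Another likely fix: since the request headers were copied from Chrome, they probably include Accept-Encoding: gzip, deflate. Dropping that header should make the server respond uncompressed in the first place.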
Using Postman, when I make a PUT request to an endpoint that returns a 204 with content, Postman is unable to parse the response, and my collection runner stops that iteration, indicating that an error has occurred.
When run outside of the runner, Postman displays "Could not get any response".
Other people have also had this problem.
Unfortunately I cannot fix the non-standard endpoint. Is there a workaround that will let Postman continue without throwing an error, especially when using the collection runner?
A 204 (No Content) response from the server means that the server processed your request successfully and that no response body is needed.
More here: https://httpstatuses.com/204
As far as I know, if the server is sending a 204 with a response payload, the endpoint is not implemented as it should be.
That would be the main reason Postman is not showing a response payload; you will only be able to read the response headers.
So if you send a PUT request and only receive headers, it means everything is OK. If you expect data, the server should respond with a 200 code.
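For reference, a compliant 204 sends no message body at all. A minimal sketch of such a handler using the JDK's built-in HttpServer (the port and path here are made up):

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class NoContentServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/resource", exchange -> {
            // ... apply the PUT here ...
            // A compliant 204: status line and headers only, no body.
            exchange.sendResponseHeaders(204, -1); // -1 means no response body
            exchange.close();
        });
        server.start();
    }
}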
That said, if Postman is telling you that it "could not get any response", it basically means the server isn't sending anything back. Try increasing the timeout in the Postman settings; it's quite possible that the server is taking too much time. Check outside the runner how long it takes to respond.
I hope this helps you.
I've run into a few problems with setting cookies, and based on the reading I've done, this should work, so I'm probably missing something important.
The situation:
Previously I received responses from my API and used JavaScript to save them as cookies, but then I found that using the Set-Cookie response header is more secure in a lot of situations.
I have two cookies: "nuser" (contains a username) and "key" (contains a session key). "nuser" shouldn't be HttpOnly, so that JavaScript can access it; "key" should be HttpOnly to prevent rogue scripts from stealing a user's session. Also, any request from the client to my API should carry both cookies.
The log-in request
Here's my current implementation: I make a request to my login API at localhost:8080/login/login (keep in mind that the web client is hosted on localhost:80, but based on what I've read, port numbers shouldn't matter for cookies).
First the web browser makes a preflight OPTIONS request to confirm that all the headers are allowed. I've made sure that the server response includes Access-Control-Allow-Credentials to tell the browser that it's okay to store cookies.
Once the preflight succeeds, the browser makes the actual POST request to the login API. The response carries the Set-Cookie header, and everything looks good at this point.
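(For concreteness, here is a rough sketch of what that login response could look like, using the JDK's built-in HttpServer; the cookie names are the ones from the question, while the values, paths, and origin are assumptions. Note too that for cross-origin requests the browser only stores and sends cookies when the client opts in, e.g. withCredentials on XMLHttpRequest.)

import com.sun.net.httpserver.Headers;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;

public class LoginServer {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/login/login", exchange -> {
            Headers h = exchange.getResponseHeaders();
            // With credentials, the allowed origin must be exact; "*" is rejected.
            h.add("Access-Control-Allow-Origin", "http://localhost");
            h.add("Access-Control-Allow-Credentials", "true");
            // "nuser" stays readable from JavaScript; "key" is HttpOnly.
            h.add("Set-Cookie", "nuser=alice; Path=/");
            h.add("Set-Cookie", "key=0123abcd; Path=/; HttpOnly");
            exchange.sendResponseHeaders(200, -1); // headers only, for brevity
            exchange.close();
        });
        server.start();
    }
}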
The Problems
This set-up yields two problems. Firstly, though the nuser cookie is not HttpOnly, I don't seem to be able to access it via JavaScript. I'm able to see nuser in my browser's cookie menu, but document.cookie yields "".
Secondly, the browser seems to place the Cookie request header only on requests to the exact same API (the login API).
But if I make a request to a different API that's still on my localhost server, the Cookie header isn't present.
(That request returns a 406 just because my server is currently configured to do that if the user isn't validated. I know it should probably be a 403, but the thing to focus on is that the Cookie header isn't included among the request headers.)
So, I've explained my implementation based on my current understanding of cookies, but I'm obviously missing something. Posting exactly what the request and response headers should look like for each task would be greatly appreciated. Thanks.
Okay, I'm still not exactly sure what was causing the problem in this specific case, but I updated my localhost:80 server to accept API requests and then make a subsequent request to localhost:8080 to get the proper information. Because the Set-Cookie header is now being set by localhost:80 (the client's origin), everything works fine. From my reading before, I thought that ports didn't matter, but apparently they do.
I'm trying to publish a post on my Facebook page using RestFB.
My code is as follows:
FacebookType publishResponse = facebookClient.publish(pageId + "/feed", FacebookType.class,
Parameter.with("message", message),
Parameter.with("picture", picture),
Parameter.with("link", link),
Parameter.with("description", description));
And my parameters have the following values:
message: Test+test+test
picture: https%3A%2F%2Fcom-smallteaser-local-photo.s3.amazonaws.com%2Fskydivemag%25232fdefcfa-c7b2-4c0d-8504-9942ccd9a4b0%2523648%25230%25232592%25232592%2523292%2523292
link: http%3A%2F%2Flocalhost%3A9000%2Farticle%2F20130503-test-test-test
description: This+is+just+a+test
I am getting the exception:
FacebookOAuthException: Received Facebook error response of type OAuthException: (#100) picture URL is not properly formatted
I read here that I can add a picture by just providing a URL, and it specifically says that it is meant for 'App developers who host their images on Amazon S3 or a similar service'.
Any idea what I'm doing wrong?
I think it’s not actually the “formatting” of the picture URL, but the content it returns:
https://com-smallteaser-local-photo.s3.amazonaws.com/skydivemag%232fdefcfa-c7b2-4c0d-8504-9942ccd9a4b0%23648%230%232592%232592%23292%23292
is delivered with a Content-Type: application/octet-stream response header (as you can see here), and that might make Facebook's scraper think that this is not really an image resource.
So you will have to figure out how to configure your hosting space to deliver these images with a correct Content-Type, for example image/jpeg or image/png.
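If the images are uploaded with the AWS SDK for Java (v1), the Content-Type can be set per object at upload time. A rough sketch; the bucket name comes from the question, while the object key and local file are hypothetical:

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.File;

public class UploadImage {
    public static void main(String[] args) {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
        ObjectMetadata meta = new ObjectMetadata();
        // S3 serves application/octet-stream unless told otherwise;
        // store the real MIME type so scrapers recognize the image.
        meta.setContentType("image/jpeg");
        s3.putObject(new PutObjectRequest(
                "com-smallteaser-local-photo",  // bucket name from the question
                "skydivemag/article-photo.jpg", // hypothetical object key
                new File("article-photo.jpg"))  // hypothetical local file
                .withMetadata(meta));
    }
}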
I got this problem, but only on older Android devices, not on a desktop. I could see in the server logs that there was a difference:
When accessing the URL on a desktop, Facebook does request the picture URL.
When accessing the URL on an older Android device, Facebook does not request the picture URL.
It turned out that I was using window.location.origin in constructing the absolute URL, which according to http://www.hyperink.com/blog/?p=18 only works on WebKit. It was solved by replacing, as the post suggests,
window.location.origin
by
window.location.protocol + "//" + window.location.hostname
I am sending GET HttpWebRequests to the Facebook Graph API. All was working fine until I deployed to the production server: now the module that expects an html/xml response is not working, and when I tested the URL in Internet Explorer, the save-file dialog popped up and the file had to be saved.
Other modules also send requests to the Facebook Graph API and differ only in the form of the requests, so I'm not sure what is going on here.
Any ideas appreciated.
Edit:
Let me try to rephrase this. On my production server, the HttpWebRequest was not returning the correct result. To test it, I copied the URL http://graph.facebook.com/pepsi, which, as an example, should return profile info viewable in the browser. The server has Internet Explorer v8, and I am not sure why it tries to download the file instead of displaying it in the browser. This is what happens in my code; yet when I make a request to a different part of the API, it works in my app but not in the browser.
Your question is not very clear. From what I gather, you want to display the JSON response in a browser. Instead, the browser is asking you to download a file.
Well, this is normal behaviour. The response you get from Facebook most likely has a MIME type of application/json. Most newer web browsers display the text in the browser itself; some browsers, however, don't know how to handle this content type and just ask you to download the file.
You mentioned that your module expects an html/xml response. Try changing it to expect application/json.
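The question uses .NET's HttpWebRequest; as an illustration of the same idea in Java, request JSON explicitly and check what Content-Type actually comes back. The URL is the example from the question (newer Graph API versions require an access token, so treat this as illustrative only):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class GraphFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://graph.facebook.com/pepsi");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json"); // ask for JSON explicitly
        System.out.println("Content-Type: " + conn.getContentType());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line); // the raw JSON profile document
            }
        }
    }
}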
You also said that it works in your app but not in your browser. I don't know what you're building, but generally you wouldn't show raw JSON to the user in a browser, right?
I'm stuck on a cookie-related question. I want to write a program that can automatically download the attachments of this forum, so I need to maintain the cookies this site sends me. When I send a GET request from my program to the login page, I get a cookie such as Set-Cookie: sso_sid=0589a967; domain=.it168.com. Now if I use a cookie viewer such as Cookie Monster and send the same GET request, my program gets the same result, but the cookie viewer shows that the site also sends me two more cookies:
testcookie (http://get2know.it/myimages/2009-12-27_072438.jpg) and token (http://get2know.it/myimages/2009-12-27_072442.jpg)
My question is: where did those two cookies come from? Why don't they show up in my program?
Thanks.
Your best bet to figure out screen-scraping problems like this one is to use Fiddler. Using Fiddler, you can compare exactly what is going over the wire in your app vs. when accessing the site from a browser. I suspect you'll see some difference between the headers sent by your app and the headers sent by the browser; this will likely account for the difference you're seeing.
Next, you can do one of two things:
change your app to send exactly the headers that the browser does (and, if you do this, you should get exactly the response that a real browser gets).
using Fiddler's "request builder" feature, start removing headers one by one and re-issuing the request. At some point, you'll remove a header which makes the response not match the response you're looking for. That means that header is required. Continue for all other headers until you have a list of headers that are required by the site to yield the response you want.
Personally, I like option #2 since it requires a minimum amount of header-setting code, although it's harder initially to figure out which headers the site requires.
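As a sketch of option #1 (and a starting point for option #2's header-trimming), here is roughly what that looks like in Java. The URL and most header values are placeholders; only the sso_sid cookie is the one from the question:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class BrowserLikeRequest {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://www.it168.com/"); // placeholder forum URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Start with every header Fiddler shows the browser sending,
        // then remove them one at a time to find the minimal required set.
        conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0)");
        conn.setRequestProperty("Accept", "text/html,application/xhtml+xml");
        conn.setRequestProperty("Accept-Language", "en-US,en;q=0.9");
        conn.setRequestProperty("Cookie", "sso_sid=0589a967");
        System.out.println("Status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            System.out.println(in.readLine()); // first line of the response body
        }
    }
}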
On your actual question of why you're seeing 2 cookies, only the diagnosis above will tell you for sure, but I suspect it may have to do with the mechanism that some sites use to detect clients who don't accept cookies. On the first request in a session, many sites will "probe" a client to see if the client accepts cookies. Typically they'll do this:
If the request doesn't have a cookie on it, the site will redirect the client to a special "cookie setting" URL.
The redirect response, in addition to having a Location: header which does the redirect, will also return a Set-Cookie header to set the cookie. The redirect will typically contain the original URL as a query string parameter.
The server-side handler for the "cookie setter" page will then look at the incoming cookie. If it's blank, this means that the user's browser is set to not accept cookies, and the site will typically redirect the user to a "sorry, you must use cookies to use this site" page.
If, however, there is a cookie header sent to the "cookie setter" URL, then the client does in fact accept cookies, and the handler will simply redirect the client back to the original URL.
The original URL, once you move on to the next page, may add an additional cookie (e.g. for a login token).
Anyway, that's one way you could end up with two cookies. Only diagnosis with Fiddler (or a similar tool) will tell you for sure, though.
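To make that flow concrete, here is a minimal sketch of such a probe using the JDK's built-in HttpServer. The cookie names testcookie and token echo the ones from the question; the paths and values are made up:

import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class CookieProbe {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // The real page: clients arriving with no cookie at all get probed.
        server.createContext("/page", ex -> {
            if (ex.getRequestHeaders().getFirst("Cookie") == null) {
                // Set a throwaway cookie and bounce to the "cookie setter",
                // carrying the original URL along as a query parameter.
                ex.getResponseHeaders().add("Set-Cookie", "testcookie=1; Path=/");
                ex.getResponseHeaders().add("Location", "/setter?orig=/page");
                ex.sendResponseHeaders(302, -1);
            } else {
                byte[] body = "page content".getBytes(StandardCharsets.UTF_8);
                ex.sendResponseHeaders(200, body.length);
                ex.getResponseBody().write(body);
            }
            ex.close();
        });

        // The "cookie setter": if the probe cookie came back, the client
        // accepts cookies, so hand out the real token and redirect back.
        server.createContext("/setter", ex -> {
            String cookie = ex.getRequestHeaders().getFirst("Cookie");
            if (cookie != null && cookie.contains("testcookie=1")) {
                ex.getResponseHeaders().add("Set-Cookie", "token=abc123; Path=/");
                ex.getResponseHeaders().add("Location", "/page");
            } else {
                ex.getResponseHeaders().add("Location", "/cookies-required");
            }
            ex.sendResponseHeaders(302, -1);
            ex.close();
        });

        server.start();
    }
}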