API request returns JSON and not HTML/XML browser content - facebook-graph-api

I am sending GET HttpWebRequests to the Facebook Graph API, and everything was working fine until I deployed to the production server. Now the module that expects an HTML/XML response is not working, and when I test the URL in Internet Explorer, the save-file dialog pops up and the file has to be saved.
Other modules also send requests to the Facebook Graph API and differ only in the form of the request, so I am not sure what is going on here.
Any ideas appreciated.
Edit:
Let me try to rephrase this. On my production server the HttpWebRequest was not returning the correct result. To test it, I copied the URL http://graph.facebook.com/pepsi, which is an example that should return profile info viewable in the browser. The server has Internet Explorer 8, and I am not sure why it tries to download the file instead of displaying it in the browser. This is what is happening in my code; when I make a request to a different part of the API, it works in my app but not in the browser.

Your question is not very clear. From what I gather, you want to display the JSON response in a browser. Instead, the browser is asking you to download a file.
Well, this is normal behaviour. The response you get from Facebook will most likely have a MIME type of application/json. Most newer web browsers display the text in the browser itself. Some browsers, however, don't know how to handle this content type and just ask you to download the file.
You mentioned that your module expects an HTML/XML response. Try changing it to expect application/json.
You also said that it works in your app but not in your browser. I don't know what you're building, but generally you wouldn't show raw JSON to the user in a browser, right?
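The question's own .NET code isn't shown, so purely as an illustration of the idea, here is a hedged sketch in C++ with libcurl (not the poster's stack) that fetches the example URL and inspects the response's MIME type before deciding which parser to hand it to:

#include <curl/curl.h>
#include <cstdio>
#include <cstring>

// Discard the body; this sketch only cares about the Content-Type header.
static size_t discard(char *ptr, size_t size, size_t nmemb, void *userdata) {
    (void)ptr; (void)userdata;
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL, "http://graph.facebook.com/pepsi");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
    if (curl_easy_perform(curl) == CURLE_OK) {
        char *ct = NULL;
        curl_easy_getinfo(curl, CURLINFO_CONTENT_TYPE, &ct);
        if (ct) printf("Content-Type: %s\n", ct); // expected: something like application/json
        if (ct && strncmp(ct, "application/json", 16) == 0) {
            // route the body to a JSON parser, not to the HTML/XML module
        }
    }
    curl_easy_cleanup(curl);
    curl_global_cleanup();
}

The same check applies in any HTTP client, including HttpWebRequest: read the Content-Type response header instead of assuming HTML/XML.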

Related

Postman cookies not set for subdomain (Postman Interceptor, Postman Native App)

I am playing around with Postman to get some insight into how things work behind the curtain, and I ran into what I believe is an issue, but I wanted to ask before creating a new issue on GitHub.
I am intercepting the request from my browser to a site with the Postman Interceptor so that I can reuse the request values in the native app. I have cookies enabled and the site (the whole domain) whitelisted.
When I use the history to resend the same request that was captured, I get an auth error caused by the cookies not being included in the request (I found that out by checking the cURL code snippet). I believe the reason is that the cookies are set under a different subdomain than the one the request is sent to.
I will try to include some pictures to clarify. My question here is:
Am I missing something, or did I set something up in the wrong way?
Or is this an issue that I should report on the official Postman GitHub page?
(Screenshots: the cURL request, and the cookies in the Postman native app.)
You should check whether the cookie is being sent by looking at the Postman console rather than at the code snippet: the console shows that it is indeed sending the cookies.

Cookie set server-side but not showing up in document.cookie

I'm trying to implement an answer from another question on this site:
Detect when browser receives file download
I've followed all of the steps and everything works up to the point where I try to retrieve the cookie. When I use Firebug, I can see the cookie that I created in the response headers, along with a cookie that was created earlier in the app by JavaScript.
The info in Firebug for the two cookies is:
name:earlierCookie,value:1234,Domain:localhost,Path:/,Expires:Session,HttpOnly:false
name:cookiefromServer,value:5678,Domain:localhost,Path:/resource/upload/file,Expires:Session,HttpOnly:false
So, you can see that the cookies are in the same domain (they have different paths). When looking at document.cookie, only earlierCookie is present.
Why can I see cookiefromServer in Firebug but not in document.cookie?
Also, please tell me if I need to post more info.
I figured this out on my own. The problem is the path. Setting the path to / from the server allows the cookie to show up in document.cookie. The reason is that a cookie is only in scope on pages whose URL path matches the cookie's Path attribute; since the page reading document.cookie was not under /resource/upload/file, the browser hid the cookie from it. With Path=/, the cookie is in scope for every page on the domain.
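For illustration, using the cookie name and value from the question, the only difference is the Path attribute in the Set-Cookie response header:
Set-Cookie: cookiefromServer=5678; Path=/resource/upload/file
Set-Cookie: cookiefromServer=5678; Path=/
The first form is in scope only for pages and requests under /resource/upload/file; the second is in scope for every page on the domain, which is why document.cookie can then see it.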

How to prevent cross-site access

OK, I just did a test and it turns out that this is not blocked by the browser, which I had kind of assumed, because it is the same way you can load jQuery and still have access to the data it loads.
I have a script located here:
https://securedomain.com/cookie_test.php - it prints out the cookies as JSON
If I include that script from this page:
http://myotherdomain.com/cookie_test.php
the JavaScript loads with its content, but it does not load with an AJAX request, because of the same-origin policy.
It also looks like, in order to work in a script tag, the output has to be a valid JS statement like:
data={secure_data:true}
but if I output it as just plain JSON, it causes a JavaScript error:
{secure_data:true}
So am I correct in assuming that the data output by this file is safe, as long as I output JUST JSON in the JSON format, and that it can't be retrieved by any other site in the client's browser?
It should be, but if you are worried, you could always add a comment around the JSON, load it into the browser as text via AJAX, strip the commenting in JavaScript, and then do a JSON.parse.
Making all cookies available this way looks like a bad idea from the side of the secure domain, though, as it defeats sensible protections like HttpOnly cookies.
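For illustration (the payload is hypothetical), the server's output with that comment trick would be:
/* {"secure_data": true} */
A script tag that includes this file executes nothing, since the entire body is one comment, while your own same-origin AJAX code can fetch it as text, strip the /* and */ delimiters, and JSON.parse the remainder.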

How to open the default browser in background and get the source code of a web page?

I'm using Dev-C++ and I'm looking for a way to open the default browser (for example, IE), or better, to load a browser instance in the background, and send a request to get the source code of the page I request.
Can I do something like this in C++?
Thank you!
P.S. I need this for Windows.
You seem to have imagined the wrong solution for your problem. If you want to get the HTML source for a web page, you don't need to somehow do it through the browser. You need to do whatever the browser does to get it.
When you enter an address into a browser, the browser sends an HTTP GET request to the server that hosts the resource you're requesting (often a web page), and the server sends back an HTTP response containing the resource content (often HTML).
You want to do the same in your application. You need to send an HTTP request to the server and read the response. A popular library for doing this is libcurl.
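For instance, here is a minimal libcurl sketch (the URL is a placeholder and error handling is trimmed for brevity) that fetches a page's HTML source into a string:

#include <curl/curl.h>
#include <iostream>
#include <string>

// libcurl calls this once per chunk of the response body;
// each chunk is appended to the std::string passed via CURLOPT_WRITEDATA.
static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata) {
    static_cast<std::string *>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main() {
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    std::string html;
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/"); // placeholder URL
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &html);
    if (curl_easy_perform(curl) == CURLE_OK)  // blocking GET, no browser involved
        std::cout << html << std::endl;       // the page's source
    curl_easy_cleanup(curl);
    curl_global_cleanup();
}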
If you don't need to send a POST request (i.e. it's just a simple web request, possibly with parameters passed on the URL as a GET), then you could just use URLDownloadToFile().
If you don't want to deal with callbacks and just want to download the file, you can call it quite simply:
URLDownloadToFile(0, "http://myserver/myfile", "C:\\mytempfile", 0, 0);
There are also a few other functions provided that will automatically push the downloaded data to a stream rather than a file.
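For example, URLOpenBlockingStream from the same urlmon library is one of them. Here is a hedged sketch (assuming an ANSI build, to match the narrow-string call above) that reads the same placeholder URL into memory instead of a temp file:

#include <windows.h>
#include <urlmon.h>
#include <iostream>
#include <string>
#pragma comment(lib, "urlmon.lib")

int main() {
    IStream *stream = NULL;
    // Same download as URLDownloadToFile, but into a COM stream, not a file.
    if (SUCCEEDED(URLOpenBlockingStream(0, "http://myserver/myfile", &stream, 0, 0))) {
        std::string page;
        char buf[4096];
        ULONG read = 0;
        // Pull the stream into a string until Read reports no more data.
        while (SUCCEEDED(stream->Read(buf, sizeof(buf), &read)) && read > 0)
            page.append(buf, read);
        stream->Release();
        std::cout << page << std::endl;
    }
}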
It can't be done in pure standard C++. You should use a native Windows library or another framework (like Qt) and use its capabilities for fetching and parsing websites. In Qt, you'd use QtWebKit.
Edit: also, if you want only the source code of a page, you can do this without a browser or browser engine; you can use Winsock, as in the sketch below.
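A bare-bones Winsock sketch of that approach (host and path are placeholders; no error handling, plain HTTP only, and the raw response headers arrive together with the body):

#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#include <cstring>
#pragma comment(lib, "ws2_32.lib")

int main() {
    WSADATA wsa;
    WSAStartup(MAKEWORD(2, 2), &wsa);

    // Resolve the host name; port 80 is plain HTTP.
    addrinfo hints = {}, *res = NULL;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    getaddrinfo("example.com", "80", &hints, &res);

    SOCKET s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    connect(s, res->ai_addr, (int)res->ai_addrlen);

    // Hand-written HTTP GET; Connection: close makes recv end at EOF.
    const char *req = "GET / HTTP/1.1\r\n"
                      "Host: example.com\r\n"
                      "Connection: close\r\n\r\n";
    send(s, req, (int)strlen(req), 0);

    // Read until the server closes the connection.
    char buf[4096];
    int n;
    while ((n = recv(s, buf, sizeof(buf), 0)) > 0)
        fwrite(buf, 1, n, stdout);

    closesocket(s);
    freeaddrinfo(res);
    WSACleanup();
}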

WinHttp Gets 404 File Not Found

I am grabbing a web page with WinHttp, and the resulting page is the site's 404 File Not Found page. I know the code works, as I have tested it with other websites. The page in question is served over plain HTTP and is an ordinary .html file.
What can I do?
You don't give a whole lot to go on. I'd probably start with a trace of the HTTP session from your WinHttp calls, compare it with a trace from a working browser-based session, and see what's different. It could be anything from a cookie to a Referer field to who-knows-what that the server might not like.
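If the trace does point to a header the server dislikes, here is a hedged WinHttp sketch (host, path, and header values are placeholders, not taken from the question) showing where a browser-like User-Agent and a Referer can be supplied, and how to read back the status code for comparison:

#include <windows.h>
#include <winhttp.h>
#include <cstdio>
#pragma comment(lib, "winhttp.lib")

int main() {
    // Some servers return 404/403 to unknown User-Agents, so mimic a browser's.
    HINTERNET session = WinHttpOpen(L"Mozilla/5.0 (compatible; MyFetcher/1.0)",
                                    WINHTTP_ACCESS_TYPE_DEFAULT_PROXY,
                                    WINHTTP_NO_PROXY_NAME, WINHTTP_NO_PROXY_BYPASS, 0);
    HINTERNET conn = WinHttpConnect(session, L"www.example.com",
                                    INTERNET_DEFAULT_HTTP_PORT, 0);
    // The fifth parameter is the Referer, another field servers key off.
    HINTERNET req = WinHttpOpenRequest(conn, L"GET", L"/page.html", NULL,
                                       L"http://www.example.com/",
                                       WINHTTP_DEFAULT_ACCEPT_TYPES, 0);
    WinHttpSendRequest(req, WINHTTP_NO_ADDITIONAL_HEADERS, 0,
                       WINHTTP_NO_REQUEST_DATA, 0, 0, 0);
    WinHttpReceiveResponse(req, NULL);

    DWORD status = 0, size = sizeof(status);
    WinHttpQueryHeaders(req, WINHTTP_QUERY_STATUS_CODE | WINHTTP_QUERY_FLAG_NUMBER,
                        WINHTTP_HEADER_NAME_BY_INDEX, &status, &size,
                        WINHTTP_NO_HEADER_INDEX);
    printf("HTTP status: %lu\n", status); // compare with what a real browser gets

    WinHttpCloseHandle(req);
    WinHttpCloseHandle(conn);
    WinHttpCloseHandle(session);
}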