Online JSONP converter/wrapper - web-services

I would like to fetch the source of a file and wrap it in JSONP.
For example, I want to retrieve pets.txt as text from a host I don't own. I want to do that using nothing but client-side JavaScript.
I'm looking for an online service that can convert anything to JSONP.
YQL
Yahoo Query Language is one of them.
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://elv1s.ru/x/pets.txt"&format=json&callback=grab
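This is how I use it from the page; grab is the callback named in the query string, and the exact shape of the result object is whatever YQL's HTML parser produces, so treat this as a sketch:
// grab must exist before the script tag loads; YQL calls it
// with the JSON response as its argument.
function grab(data) {
  // data.query.results holds the scraped content (shape varies).
  console.log(data.query.results);
}

var script = document.createElement('script');
script.src = 'http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://elv1s.ru/x/pets.txt"&format=json&callback=grab';
document.getElementsByTagName('head')[0].appendChild(script);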
This works if the URL is not blocked by robots.txt; YQL respects robots.txt. For example, I can't fetch http://userscripts.org/scripts/source/62706.user.js because it is blocked via robots.txt:
http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20html%20where%20url%3D"http://userscripts.org/scripts/source/62706.user.js"&format=json&callback=grab
"forbidden":"robots.txt for the domain disallows crawling for url: http://userscripts.org/scripts/source/62706.user.js"
So I'm looking for other solutions.

I built jsonpwrapper.com.
It's unstable and slower than YQL, but it doesn't care about robots.txt.

Here's another one, much faster, built on DigitalOcean and CloudFlare, with caching and more: http://json2jsonp.com

Nononono. No. Just please; no. That is not JSONP; it is JavaScript that executes a function with an object as its parameter, and that object contains more JavaScript. Aaah!
This is JSON, because it's just one object:
{
  "one": 1,
  "two": 2,
  "three": 3
}
This is JSONP, because it's just one object passed to a function; if you go to http://somesite/get_some_object?jsonp=grab, the server will return:
grab({
  "one": 1,
  "two": 2,
  "three": 3
});
This is not JSON at all. It's just JavaScript:
alert("hello");
And this? It's JavaScript code stored inside a string (ouch!), inside an object, passed to a function that is supposed to evaluate the string (but might or might not):
grab({"body": "alert(\"Hello!\");\n"});
Look at all those semicolons and backslashes! I get nightmares from this kind of stuff. It's like a badly written Lisp macro: much more complicated than it needs to (and should!) be. Instead, define a function called grab in your code:
function grab(message) {
  alert(message.body);
}
and then use JSONP to have the server return:
grab({body: "Hello!"});
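Wiring that up client-side is just a script tag; a minimal sketch, with grab being the function defined above:
// Injecting the script tag performs the request; the server's reply,
// grab({body: "Hello!"});, executes and calls your function.
var script = document.createElement('script');
script.src = 'http://somesite/get_some_object?jsonp=grab';
document.getElementsByTagName('head')[0].appendChild(script);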
Don't let the server decide how to run your web page. Instead, let your web page decide how to run itself, and just have the server fill in the blanks.
As for an online service that does this? I don't know of any, sorry.

I'm not sure what you're trying to do here, but nobody will use something like this. Nobody is going to trust your service to always execute as it should and output the expected JavaScript code. People let Yahoo do it because they trust Yahoo, but they will not trust you.

Related

CFWheels: Redirect to URL with Params Hidden

I am using the redirectTo() function with params to redirect to other pages with a query string in the URL. From a security standpoint this does not look appealing, because the user can change the parameters in the URL and thus alter what is inserted into the database.
My code is:
redirectTo(action="checklist", params="r=#r#&i=#insp#&d=#d#");
Is there any way around this? I am not using a form; I just wish to redirect, and I want the destination action/controller to know what I am passing without displaying it in the URL.
You can obfuscate the variables in the URL. CFWheels makes this really easy.
All you have to do is call set(obfuscateURLs=true) in the config/settings.cfm file to turn on URL obfuscation.
I am sure this works with the linkTo() function, and I hope it works with redirectTo() as well; I don't have a setup to check right now. If it doesn't work for redirectTo(), you can use the obfuscateParam() and deObfuscateParam() functions to do the job for you.
Caution: this only makes it harder for the user to guess the value. It doesn't encrypt the value.
To learn more, please read the documentation on configuration and defaults and on obfuscating URLs.
A much better approach in this particular situation is to write the params to the flash. The flash is exactly the same thing as it is in Ruby on Rails, or the ViewBag in ASP.NET: it stores the data in a session or cookie variable, and it is deleted at the end of the next page load. This keeps you from posting back long query strings like someone who has been coding for less than a year. obfuscateParam() only works with numbers and is incredibly insecure: any power user can easily deobfuscate the value, let alone someone who actually makes a living stealing data.

RESTservice, resource with two different outputs - how would you do it?

I'm currently working on a more or less RESTful web service, a sort of content API for my company's articles. We currently have a resource for getting all the content of a specific article:
http://api.com/content/articles/{id}
will return the full set of article data for the given article ID.
Currently we control a lot of the article's business logic on the server, because we only serve a native app from the web service. This means we convert tags, links, images and so on in the body text of the article into a protocol the native app can understand. The same goes for a lot of different attributes and data on the article: we transform and modify its original (web) state into a state that the native app will understand.
For example, img tags will be converted from a normal <img src="http://source.com"/> into an <img src="inline-image//{imageId}"/> tag; the same goes for anchor tags, etc.
Now I have to implement a resource that can return the article's data in a new representation.
I'm puzzled over how best to do this.
I could just implement a completely new resource at a different URL, like content/articles/web/{id}, and move the old one to content/articles/app/{id}.
I could also specify in the documentation of the resource that a client should always send a specific request header, maybe the Accept header, which the web service uses to determine which representation of the article to return.
I could also just keep the original URL and use a URL parameter, like .../{id}?version=app or .../{id}?version=web.
What would you reckon is the best option? My personal preference leans towards option 1, simply because I think it's easier for clients of the web service to understand.
Regards, Martin.
EDIT:
I have chosen to go with option 1. Thanks for helping out and giving pros and cons. :)
I would choose #1. If you need to preserve the existing URLs, you could add a new one: content/articles/{id}/native or content/native-articles/{id}. Both are REST enough.
Working with paths makes the content more easily cacheable than either the header or the param option. Using Content-Type overcomplicates the service, especially when both representations are returning JSON.
Use the HTTP concept of Content Negotiation. Use the Accept header with vendor types.
Get the articles in the native representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.native+json
Get the articles in the original representation:
GET /api.com/content/articles/1234
Accept: application/vnd.com.example.article.orig+json
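A client then picks the representation by naming the vendor type in the Accept header. A minimal XMLHttpRequest sketch, using the URL and media types proposed above:
// Ask for the native representation; swap in ...orig+json
// to get the original one instead.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://api.com/content/articles/1234');
xhr.setRequestHeader('Accept', 'application/vnd.com.example.article.native+json');
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4 && xhr.status === 200) {
    var article = JSON.parse(xhr.responseText);
    console.log(article);
  }
};
xhr.send();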
Option 1 and Option 3
Both are perfectly good solutions. I like the way option 1 looks better, but that is just aesthetics; it doesn't really matter. If you choose one of these options, you should have requests to the old URL redirect to the new location using a 301.
Option 2
This could work as well, but only if the two responses have different media types. From the description, I couldn't really tell whether that is the case. I would not define a custom Content-Type just to be able to use content negotiation; if the media type is not actually different, I would not use this option.
Perhaps option 2, with the header being a Content-Type?
That seems to be the way resources are served in differing formats, e.g. XML, JSON, or some custom format.

Read AICC Server response in cross domain implementation

I am currently trying to develop a web activity that a client would like to track via their Learning Management System. Their LMS uses the AICC standard (HACP binding), and they keep the actual learning objects on a separate content repository.
Right now I'm struggling with the types of communication between the LMS and the "course", given that they sit on two different servers. I'm able to retrieve the sessionId and the aicc_url from the URL string when the course launches, and I can successfully post values to the aicc_url on the LMS.
The difficulty is that I cannot read and parse the response returned by the LMS (which is formatted as plain text). AICC stipulates that the course start by posting a "getParam" command to the aicc_url, with the session ID, in order to retrieve information like completion status, bookmarking information from previous sessions, user ID information, etc., all of which I need.
I have tried three different approaches so far:
1 - I started with jQuery (1.7) and AJAX, which is how I would typically go about a same-server implementation. This returned a "no transport" error on the XMLHttpRequest. After some forum reading, I tried making sure that the AJAX call's crossDomain property was set to true, as well as a recommendation to insert $.support.cors = true above the AJAX call; neither helped.
2 & 3 - I tried using an old-school frameset with a form in a bottom frame, which would submit and refresh with the returned text from the LMS so I could read it via JavaScript; and then a variation on that using an iframe as the target of an actual form, with an onload handler to read and parse the contents. Both of these approaches worked in a same-server environment, but fail in the cross-domain environment.
I'm told that all the other courses running off the content repository bookmark and track completion, so obviously it is possible to read the return values from the LMS somehow. AICC is frequently pitched as working in cross-server scenarios, so I'm thinking there must be a commonly used way of doing this within the AICC structure that I am overlooking. My forum searches so far haven't turned up anything that's gotten me much further, so if anyone has experience with cross-domain AICC implementations I could certainly use recommendations!
The only idea I have left is to set up a PHP "relay" on the same server as the course: the front-end page would send values to it, the PHP would submit them to the LMS and relay the LMS's return text back to the front-end iframe or AJAX call, so that everything is perceived as being within the same domain. I'm not sure if there's a way to solve the issue without going server-side, but it seems likely there must be a common solution to this within AICC.
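For what it's worth, the front-end half of that relay idea would look something like this with the jQuery I'm already using (relay.php is a hypothetical same-origin script that forwards the HACP post to the aicc_url and echoes back the LMS's plain-text reply):
// The launch URL carries the session id and LMS endpoint, usually as
// AICC_SID and AICC_URL query parameters.
var query = {};
$.each(window.location.search.slice(1).split('&'), function (i, pair) {
  var kv = pair.split('=');
  query[kv[0]] = decodeURIComponent(kv[1] || '');
});

// Same-origin POST to the relay, so no cross-domain restriction applies.
$.post('/relay.php', {
  command: 'getParam',
  session_id: query.AICC_SID,
  aicc_url: query.AICC_URL
}, function (responseText) {
  // The LMS reply is plain text (name=value lines); parse it here as needed.
  console.log(responseText);
}, 'text');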
Thanks in advance!
Edits and updates:
For anyone encountering similar problems, I found a few resources that may help explain the problem as well as some alternate solutions.
The first is specific to Plateau, a big player in the LMS industry that was acquired by SuccessFactors. It's some documentation they provide on setting up a proxy to handle cross-domain content:
http://content.plateausystems.com/ContentIntegration/content/support_files/Cross-domain_Proxlet_Installation.pdf
The second is a slide presentation from SuccessFactors that highlights the challenge of cross-domain content and illustrates some back-end ideas for resolving it, including the use of reverse proxies. The relevant parts start around slides 21-22 (page 11 in the PDF).
http://www.successfactors.com/static/docs/successconnect/sf/successfactors-content-integration-turley.pdf
Hope that helps anyone else out there trying to resolve the same issues!
The answer in this post may lead you in the right direction:
Best Practice: Legitimate Cross-Site Scripting
I think you are on the right track with setting up a PHP "relay." This is similar to choice #1 in the answer from the other post, and it seems to make the most sense given what you described in your question.

Forcing ASP.net webservice to return JSON

I have an ASP.net web service that I'm using for a web application; it returns either XML or JSON data to me, depending on the function I call. This has worked well so far, but I've run into a problem. I want to create an "export" link on my page that will download a JSON file. The link is formatted very simply:
<a href="mywebserviceaddress/ExportItem?itemId=2">Export This Item</a>
As you might imagine, this should export item 2. So far so good, yes?
The problem is that, since I'm not specifically requesting JSON as the accepted content type, ASP.net absolutely refuses to send back anything but XML, which just isn't appropriate for this situation. The code is essentially as follows:
[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public Item ExportItem(int itemId)
{
    Context.Response.AddHeader("content-disposition", "attachment; filename=export.json"); // Makes it a download
    return GetExportItem(itemId);
}
Despite my specifying the ResponseFormat as JSON, I always get back XML unless I request this method via AJAX (using Google Web Toolkit, BTW):
RequestBuilder builder = new RequestBuilder(RequestBuilder.POST, "mywebserviceaddress/ExportFunc");
builder.setHeader("Content-type","application/json; charset=utf-8");
builder.setHeader("Accepts","application/json");
builder.sendRequest("{\"itemId\":2}", new RequestCallback(){...});
That's great, but AJAX won't give me a download dialog. Is there any way to force ASP.net to give me back JSON, regardless of how the data is requested? It would seem to me that not having a manual override for this behavior is a gross design oversight.
QUICK ANSWER:
First off, let me say that I think womp's answer is probably the better way to go long term (converting to WCF), but deostroll led me to the answer I'll be using for the immediate future. Also, it should be noted that this seems to work primarily because I wanted just a download; it may not work as well in all situations. In any case, here's the code that I ended up using to get the result I wanted:
[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public void ExportItem(int itemId)
{
    Item item = GetExportItem(itemId);

    // Serialize the item to JSON by hand instead of letting ASP.net decide.
    JavaScriptSerializer js = new JavaScriptSerializer();
    string str = js.Serialize(item);

    // Hand-build the response: clear anything ASP.net has buffered,
    // mark the body as JSON, and force a download dialog.
    Context.Response.Clear();
    Context.Response.ContentType = "application/json";
    Context.Response.AddHeader("content-disposition", "attachment; filename=export.json");
    Context.Response.AddHeader("content-length", str.Length.ToString());
    Context.Response.Write(str);
    Context.Response.Flush();
}
Please note the return type of void (which means that your WSDL will be next to useless for this function). Returning anything will screw up the response that is being hand-built.
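With the response hand-built like this, the plain link from the question is enough to pop the download dialog; it can also be kicked off from script with a simple navigation (no AJAX). A hypothetical sketch:
// Navigating to the endpoint triggers the browser's download dialog,
// because the response carries a content-disposition: attachment header.
function exportItem(itemId) {
  window.location.href = 'mywebserviceaddress/ExportItem?itemId=' + itemId;
}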
ASP.net web services are SOAP-based web services, so they'll always return XML. The AJAX libraries came along and the ScriptMethod stuff was introduced, but that doesn't change the underlying concept.
There are a couple of things you can do.
WebMethods are borderline obsolete with the introduction of WCF. You might consider migrating your web services to WCF, where you'll have much greater control over the output format.
If you don't want to do that, you can manually serialize the result of your webservice calls into JSON, and the service will wrap that in a SOAP header. You would then need to strip out the SOAP stuff.
Here are two forums threads for your reference:
http://forums.asp.net/t/1118828.aspx
http://forums.asp.net/p/1054378/2338982.aspx#2338982
I have no clear idea, but they talk about concentrating on setting the content type to application/json. I haven't worked with WCF before, but I think you can make use of the Response object.
Set the content type on the response object, do a Response.Write passing your JSON data as a string, and then do a Response.End.
Just thought I'd throw this out there since it wasn't mentioned previously: if you use web services with ASP.NET 3.5, JSON is the default return format. It also comes with its own JSON serializer, so you can stop using the JavaScriptSerializer.
This article on Rick Strahl's blog talks about the strongly-typed conversion you can do between server side classes and JSON objects from the client.
I've recently completed a project using this new JSON stuff in .NET 3.5, and I'm extremely impressed with the performance. Maybe it's worth a look...

Best way to decide on XML or HTML response?

I have a resource at a URL that both humans and machines should be able to read:
http://example.com/foo-collection/foo001
What is the best way to distinguish between human browsers and machines, and return either HTML or a domain-specific XML response?
(1) The Accept type field in the request?
(2) An additional bit of URL? eg:
http://example.com/foo-collection/foo001 -> returns HTML
http://example.com/foo-collection/foo001?xml -> returns, er, XML
I do not wish to oblige machines reading the resource to parse HTML (or XHTML, for that matter). Machines like the Googlebot, however, should receive the HTML response.
It is reasonable to assume I control the machine readers.
If this is under your control, rather than adding a query parameter, why not add a file extension:
http://example.com/foo-collection/foo001.html - return HTML
http://example.com/foo-collection/foo001.xml - return XML
Apart from anything else, that means if someone fetches it with wget or saves it from their browser, it'll have an appropriate filename without any fuss.
My preference is to make the format a first-class part of the URI. This is debatable, since there are, in a sense, multiple URIs for the same resource, and it's unclear whether "format" is really part of the URI at all.
http://example.com/foo-collection/html/foo001
http://example.com/foo-collection/xml/foo001
These are very easy to deal with in a web framework that has URI parsing to direct the request to the proper application.
If this is indeed the same resource with two different representations, HTTP invites you to use the Accept header, as you suggest. This is probably a very reliable way to distinguish between the two scenarios. You can be quite sure that user agents (including search engine spiders) send the Accept header properly.
As for the machine agents you are going to give XML: are they under your control? In that case you can be doubly sure that Accept will work. Even if they do not set the header properly, you can serve XML as the default; user agents DO set the header properly.
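As a sketch of what that dispatch could look like on the server (illustrative Node-style JavaScript; renderHtml and renderXml are stand-ins for however you actually produce each representation):
var http = require('http');

// Stand-in renderers for the two representations of foo001.
function renderHtml() { return '<html><body>foo001</body></html>'; }
function renderXml() { return '<?xml version="1.0"?><foo id="foo001"/>'; }

http.createServer(function (req, res) {
  var accept = req.headers.accept || '';
  if (accept.indexOf('application/xml') !== -1) {
    // Machine readers ask for XML explicitly...
    res.writeHead(200, { 'Content-Type': 'application/xml' });
    res.end(renderXml());
  } else {
    // ...while browsers prefer text/html, so HTML is the default.
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end(renderHtml());
  }
}).listen(8080);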
I would try to use the Accept header for this, because that is exactly what the Accept header is there for.
The problem with having two different URLs is that it is not automatically apparent that they represent the same underlying resource. This can be bad if a user finds a URL in one program, which renders HTML, and pastes it into another, which needs XML. At that point a smart user could probably change the URL appropriately, but this is just a source of error that you don't need.
I would say adding a query string parameter is your best bet. The only way to automatically detect whether your client is a browser (human) or an application would be to read the User-Agent string from the HTTP request, but since this is easily set by any application to mimic a browser, you're not guaranteed that it will work.