As I understand it, you should not use POST to retrieve data.
For example, on my current project we are POSTing a body like the following to get data back:
{
    "zipCode": "85022",
    "city": "PHOENIX",
    "country": "US",
    "products": [
        {
            "sku": "abc-21",
            "qty": 2
        },
        {
            "sku": "def-13",
            "qty": 2
        }
    ]
}
Does it make sense to POST here? How could this be done without POSTing? There can be one or more products.
There is actually a SEARCH method in HTTP, but sadly it is WebDAV-specific: https://msdn.microsoft.com/en-us/library/aa143053(v=exchg.65).aspx. So if you want to send a request body with the request, you can try that.
POSTing is okay if you have a complex search. "Complex" is relative; to me it means a query that combines different logical operators.
The current one is not that complex, and you can put the non-hierarchical components into the query string of the URI. An example, with line breaks added for readability:
GET /products/?
zipCode=85022&
city=PHOENIX&
country=US&
filters[0]['sku']=abc-21&
filters[0]['qty']=2&
filters[1]['sku']=def-13&
filters[1]['qty']=2
You can choose a different serialization format and encode it as URI component if you want.
GET /products/?filter={"zipCode":"85022","city":"PHOENIX","country":"US","products":[{"sku":"abc-21","qty":2},{"sku":"def-13","qty":2}]}
One potential option is to JSON.stringify your object and send it as a query string parameter on the GET.
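For example, a minimal client-side sketch (the /products endpoint and the filter parameter name are just assumptions for illustration) could build that URL like this:

// A minimal sketch: serialize the search object and send it as a single
// query string parameter; the endpoint and parameter name are assumptions.
var filter = {
    zipCode: "85022",
    city: "PHOENIX",
    country: "US",
    products: [
        { sku: "abc-21", qty: 2 },
        { sku: "def-13", qty: 2 }
    ]
};
// encodeURIComponent makes the serialized JSON safe to use in the URI
var url = "/products/?filter=" + encodeURIComponent(JSON.stringify(filter));
// The server then JSON-parses the "filter" parameter on its side.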
Is there a common pattern for using Backbone with a service that filters a collection on the server? I haven't been able to find anything in Google and Stack Overflow searches, which is surprising, given the number of Backbone apps in production.
Suppose I'm building a new front end for Stack Overflow using Backbone.
On the search screen, I need to pass the following information to the server and get back a page's worth of results.
filter criteria
sort criteria
results per page
page number
Backbone doesn't seem to have much interest in offloading filtering to the server. It expects the server to return the entire list of questions and perform filtering on the client side.
I'm guessing that in order to make this work I need to subclass Collection and override the fetch method so that rather than always GETting data from the same RESTful URL, it passes the above parameters.
I don't want to reinvent the wheel. Am I missing a feature in Backbone that would make this process simpler or more compatible with existing components? Is there already a well-established pattern to solve this problem?
If you just want to pass GET parameters on a request, you should be able to specify them in the fetch call itself.
collection.fetch({
    data: {
        sortDir: "ASC",
        totalResults: 100
    }
});
The options passed into fetch translate directly to a jQuery.ajax call, so a data property will automatically be serialized into the query string for a GET request. Of course, overriding the fetch method is fine too, especially if you want to standardize portions of the logic.
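For example, assuming the collection's url is /questions, the call above would produce a request along these lines:

GET /questions?sortDir=ASC&totalResults=100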
You're right, creating your own Collection is the way to go, as there are no standards for server-side pagination apart from OData.
Instead of overriding fetch, what I usually do in these cases is define the collection's url property as a function and return the proper URL based on the collection's state.
In order to paginate, however, the server must also return the total number of items, so you can calculate how many pages there are given X items per page. Some APIs expose this through hypermedia conventions such as HAL or HATEOAS links, or simply through HTTP response headers. To get at that information, I normally add a listener to the sync event, which is raised after any AJAX operation. If you need to notify external components (normally the view) of the number of available items/pages, trigger an event.
Simple example: your server returns X-ItemTotalCount in the response headers, and expects parameters page and items in the request querystring.
var PagedCollection = Backbone.Collection.extend({
    initialize: function(models, options) {
        this.listenTo(this, "sync", this._parseHeaders);
        this.currentPage = 0;
        this.pageSize = 10;
        this.itemCount = 0;
    },
    url: function() {
        return this.baseUrl + "?page=" + this.currentPage + "&items=" + this.pageSize;
    },
    // The "sync" event is triggered with (collection, response, options);
    // the underlying XHR lives on options.xhr, which is where the headers are.
    _parseHeaders: function(collection, response, options) {
        var totalItems = options.xhr.getResponseHeader("X-ItemTotalCount");
        if (totalItems) {
            this.itemCount = parseInt(totalItems, 10);
            // trigger an event with arguments (collection, totalItems)
            this.trigger("pages:itemcount", this, this.itemCount);
        }
    }
});

var PostCollection = PagedCollection.extend({
    baseUrl: "/posts"
});
Notice we use another property of our own, baseUrl, to simplify extending PagedCollection. If you need to add your own initialize, call the parent's one like this, or the headers won't be parsed:
PagedCollection.prototype.initialize.apply(this, arguments);
You can even add fetchNext and fetchPrevious methods to the collection, which simply modify this.currentPage and fetch. Remember to pass {reset: true} in the fetch options if you want to replace one page with the next instead of appending.
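For reference, a minimal sketch of those two methods (meant to be added to the PagedCollection definition above) could look like this:

fetchNext: function(options) {
    this.currentPage++;
    // {reset: true} replaces the current page's models instead of appending to them
    return this.fetch(_.extend({ reset: true }, options));
},
fetchPrevious: function(options) {
    this.currentPage = Math.max(0, this.currentPage - 1);
    return this.fetch(_.extend({ reset: true }, options));
}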
Now, if your project's backend is consistent, any resource that supports server-side pagination can be represented on the client with a PagedCollection-based collection, given that the same parameters/responses are used.
I have an ASP.NET web service that I'm using for a web application, and it returns either XML or JSON data to me depending on the function I call. This has been working well thus far, but I've run into a problem. I want to create an "export" link on my page that will download a JSON file. The link is formatted very simply:
Export This Item
As you might imagine, this should export item 2. So far so good, yes?
The problem is that since I'm not specifically requesting JSON as the accepted content type, ASP.NET absolutely refuses to send back anything but XML, which just isn't appropriate for this situation. The code is essentially as follows:
[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public Item ExportItem(int itemId)
{
    Context.Response.AddHeader("content-disposition", "attachment; filename=export.json"); // Makes it a download
    return GetExportItem(itemId);
}
Despite my specifying the ResponseFormat as JSON, I always get back XML unless I request this method via AJAX (using Google Web Toolkit, BTW):
RequestBuilder builder = new RequestBuilder(RequestBuilder.POST, "mywebserviceaddress/ExportFunc");
builder.setHeader("Content-type","application/json; charset=utf-8");
builder.setHeader("Accepts","application/json");
builder.sendRequest("{\"itemId\":2}", new RequestCallback(){...});
That's great, but AJAX won't give me a download dialog. Is there any way to force ASP.NET to give me back JSON, regardless of how the data is requested? It would seem to me that not having a manual override for this behavior is a gross design oversight.
QUICK ANSWER:
First off, let me say that I think womp's answer is probably the better way to go long term (convert to WCF), but deostroll led me to the answer that I'll be using for the immediate future. Also, it should be noted that this seems to work primarily because I just wanted a download; it may not work as well in all situations. In any case, here's the code that I ended up using to get the result I wanted:
[WebMethod]
[ScriptMethod(ResponseFormat = ResponseFormat.Json)]
public void ExportItem(int itemId)
{
    Item item = GetExportItem(itemId);

    // Serialize the item to JSON by hand instead of letting ASP.NET wrap it in XML
    JavaScriptSerializer js = new JavaScriptSerializer();
    string str = js.Serialize(item);

    // Build the response manually so the browser treats it as a file download
    Context.Response.Clear();
    Context.Response.ContentType = "application/json";
    Context.Response.AddHeader("content-disposition", "attachment; filename=export.json");
    Context.Response.AddHeader("content-length", str.Length.ToString());
    Context.Response.Flush();
    Context.Response.Write(str);
}
Please note the return type of void (which means that your WSDL will be next to useless for this function). Returning anything will screw up the response that is being hand-built.
ASP.NET web services are SOAP-based web services, so they'll always return XML. The AJAX libraries came along and the ScriptMethod stuff was introduced, but that doesn't change the underlying concept.
There are a couple of things you can do.
WebMethods are borderline obsolete with the introduction of WCF. You might consider migrating your web services to WCF, where you'll have much greater control over the output format.
If you don't want to do that, you can manually serialize the result of your web service calls into JSON, and the service will wrap that in a SOAP envelope. You would then need to strip out the SOAP stuff.
Here are two forum threads for your reference:
http://forums.asp.net/t/1118828.aspx
http://forums.asp.net/p/1054378/2338982.aspx#2338982
I have no clear idea, but they talk about concentrating on setting the content type to application/json. I haven't worked with WCF before, but I think you can make use of the Response object.
Set the content type on the response object, do a Response.Write passing your JSON data as a string, and then call Response.End.
Just thought I'd throw this out there since it wasn't mentioned previously... if you use web services with ASP.NET 3.5, JSON is the default return format. It also comes with a JSON serializer, so you can stop using the JavaScriptSerializer.
This article on Rick Strahl's blog talks about the strongly-typed conversion you can do between server side classes and JSON objects from the client.
I've recently completed a project using this new JSON stuff in .NET 3.5, and I'm extremely impressed with the performance. Maybe it's worth a look...
What proven design patterns exist for batch operations on resources within a REST style web service?
I'm trying to strike a balance between ideals and reality in terms of performance and stability. We've got an API right now where all operations either retrieve from a list resource (i.e. GET /user) or operate on a single instance (PUT /user/1, DELETE /user/22, etc.).
There are some cases where you want to update a single field of a whole set of objects. It seems very wasteful to send the entire representation for each object back and forth to update the one field.
In an RPC style API, you could have a method:
/mail.do?method=markAsRead&messageIds=1,2,3,4... etc.
What's the REST equivalent here? Or is it OK to compromise now and then? Does it ruin the design to add a few specific operations where they really improve performance? The client in all cases right now is a web browser (a JavaScript application on the client side).
A simple RESTful pattern for batches is to make use of a collection resource. For example, to delete several messages at once:
DELETE /mail?id=0&id=1&id=2
It's a little more complicated to batch update partial resources or resource attributes, that is, to update each markAsRead attribute. Basically, instead of treating the attribute as part of each resource, you treat it as a bucket into which to put resources. One example was already posted; I adjusted it a little.
POST /mail?markAsRead=true
POSTDATA: ids=[0,1,2]
Basically, you are updating the list of mail marked as read.
You can also use this for assigning several items to the same category.
POST /mail?category=junk
POSTDATA: ids=[0,1,2]
It's obviously much more complicated to do iTunes-style batch partial updates (e.g., artist+albumTitle but not trackTitle). The bucket analogy starts to break down.
POST /mail?markAsRead=true&category=junk
POSTDATA: ids=[0,1,2]
In the long run, it's much easier to update a single partial resource, or resource attributes. Just make use of a subresource.
POST /mail/0/markAsRead
POSTDATA: true
Alternatively, you could use parameterized resources. This is less common in REST patterns, but is allowed in the URI and HTTP specs. A semicolon divides horizontally related parameters within a resource.
Update several attributes, several resources:
POST /mail/0;1;2/markAsRead;category
POSTDATA: markAsRead=true,category=junk
Update several resources, just one attribute:
POST /mail/0;1;2/markAsRead
POSTDATA: true
Update several attributes, just one resource:
POST /mail/0/markAsRead;category
POSTDATA: markAsRead=true,category=junk
The RESTful creativity abounds.
Not at all -- I think the REST equivalent is (or at least one solution is) almost exactly that -- a specialized interface designed to accommodate an operation required by the client.
I'm reminded of a pattern mentioned in Crane and Pascarello's book Ajax in Action (an excellent book, by the way -- highly recommended) in which they illustrate implementing a CommandQueue sort of object whose job it is to queue up requests into batches and then post them to the server periodically.
The object, if I remember correctly, essentially just held an array of "commands" -- e.g., to extend your example, each one a record containing a "markAsRead" command, a "messageId" and maybe a reference to a callback/handler function -- and then according to some schedule, or on some user action, the command object would be serialized and posted to the server, and the client would handle the consequent post-processing.
I don't happen to have the details handy, but it sounds like a command queue of this sort would be one way to handle your problem; it'd reduce the overall chattiness substantially, and it'd abstract the server-side interface in a way you might find more flexible down the road.
Update: Aha! I've found a snip from that very book online, complete with code samples (although I still suggest picking up the actual book!). Have a look here, beginning with section 5.5.3:
This is easy to code but can result in a lot of very small bits of traffic to the server, which is inefficient and potentially confusing. If we want to control our traffic, we can capture these updates and queue them locally and then send them to the server in batches at our leisure. A simple update queue implemented in JavaScript is shown in listing 5.13. [...]
The queue maintains two arrays. queued is a numerically indexed array, to which new updates are appended. sent is an associative array, containing those updates that have been sent to the server but that are awaiting a reply.
Here are two pertinent functions -- one responsible for adding commands to the queue (addCommand), and one responsible for serializing and then sending them to the server (fireRequest):
CommandQueue.prototype.addCommand = function(command)
{
    if (this.isCommand(command))
    {
        // append the new command to the pending queue
        this.queued.push(command);
    }
}

CommandQueue.prototype.fireRequest = function()
{
    if (this.queued.length == 0)
    {
        return;
    }
    var data = "data=";
    for (var i = 0; i < this.queued.length; i++)
    {
        var cmd = this.queued[i];
        if (this.isCommand(cmd))
        {
            data += cmd.toRequestString();
            this.sent[cmd.id] = cmd;
        }
    }
    // ... and then send the contents of data in a POST request
}
That ought to get you going. Good luck!
While I think @Alex is on the right path, conceptually I think it should be the reverse of what is suggested.
The URL is in effect "the resources we are targeting" hence:
[GET] mail/1
means get the record from mail with id 1 and
[PATCH] mail/1 data: mail[markAsRead]=true
means patch the mail record with id 1. The querystring is a "filter", filtering the data returned from the URL.
[GET] mail?markAsRead=true
So here we are requesting all the mail already marked as read. A [PATCH] to this path would be saying "patch the records already marked as read"... which isn't what we are trying to achieve.
So a batch method, following this thinking should be:
[PATCH] mail/?id=1,2,3 <the records we are targeting> data: mail[markAsRead]=true
Of course, I'm not saying this is true REST (which doesn't permit batch record manipulation); rather, it follows the logic already present and in use by REST.
Your language, "It seems very wasteful...", indicates to me an attempt at premature optimization. Unless it can be shown that sending the entire representation of objects is a major performance hit (we're talking unacceptable to users, i.e. > 150 ms), then there's no point in attempting to create a new, non-standard API behaviour. Remember, the simpler the API, the easier it is to use.
For deletes, send the following, as the server doesn't need to know anything about the state of the object before the delete occurs:
DELETE /emails
POSTDATA: [{id:1},{id:2}]
The next thought is that if an application is running into performance issues with bulk updates of objects, you should consider breaking each object up into multiple objects. That way the JSON payload is a fraction of the size.
As an example, when sending a request to update the "read" and "archived" statuses of two separate emails, you would have to send the following:
PUT /emails
POSTDATA: [
{
id:1,
to:"someone#bratwurst.com",
from:"someguy#frommyville.com",
subject:"Try this recipe!",
text:"1LB Pork Sausage, 1 Onion, 1T Black Pepper, 1t Salt, 1t Mustard Powder",
read:true,
archived:true,
importance:2,
labels:["Someone","Mustard"]
},
{
id:2,
to:"someone#bratwurst.com",
from:"someguy#frommyville.com",
subject:"Try this recipe (With Fix)",
text:"1LB Pork Sausage, 1 Onion, 1T Black Pepper, 1t Salt, 1T Mustard Powder, 1t Garlic Powder",
read:true,
archived:false,
importance:1,
labels:["Someone","Mustard"]
}
]
I would split out the mutable components of the email (read, archived, importance, labels) into a separate object as the others (to, from, subject, text) would never be updated.
PUT /email-statuses
POSTDATA: [
{id:15,read:true,archived:true,importance:2,labels:["Someone","Mustard"]},
{id:27,read:true,archived:false,importance:1,labels:["Someone","Mustard"]}
]
Another approach is to leverage PATCH, to explicitly indicate which properties you intend to update and that all others should be ignored.
PATCH /emails
POSTDATA: [
{
id:1,
read:true,
archived:true
},
{
id:2,
read:true,
archived:false
}
]
People state that PATCH should be implemented by providing an array of changes containing an action (CRUD), a path (URL), and a value change. This may be considered the standard implementation, but if you look at the entirety of a REST API it is a non-intuitive one-off. Also, the implementation above is how GitHub has implemented PATCH.
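For reference, the "array of changes" style mentioned above roughly corresponds to the JSON Patch format (RFC 6902). A hypothetical client-side sketch, assuming an endpoint that accepts that media type, might look like:

// Hypothetical sketch: a JSON Patch (RFC 6902) style request against an
// assumed /emails/1 endpoint; the URL and acceptance of this media type
// are assumptions, not part of the API discussed above.
$.ajax({
    url: "/emails/1",
    type: "PATCH",
    contentType: "application/json-patch+json",
    data: JSON.stringify([
        { op: "replace", path: "/read", value: true },
        { op: "replace", path: "/archived", value: true }
    ])
});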
To sum it up, it is possible to adhere to RESTful principles with batch actions and still have acceptable performance.
The Google Drive API has a really interesting system to solve this problem (see here).
What they basically do is group different requests into one Content-Type: multipart/mixed request, with each complete individual request separated by a defined delimiter. Headers and query parameters of the batch request are inherited by the individual requests (e.g. Authorization: Bearer some_token) unless they are overridden in the individual request.
Example: (taken from their docs)
Request:
POST https://www.googleapis.com/batch
Accept-Encoding: gzip
User-Agent: Google-HTTP-Java-Client/1.20.0 (gzip)
Content-Type: multipart/mixed; boundary=END_OF_PART
Content-Length: 963
--END_OF_PART
Content-Length: 337
Content-Type: application/http
content-id: 1
content-transfer-encoding: binary
POST https://www.googleapis.com/drive/v3/files/fileId/permissions?fields=id
Authorization: Bearer authorization_token
Content-Length: 70
Content-Type: application/json; charset=UTF-8
{
"emailAddress":"example#appsrocks.com",
"role":"writer",
"type":"user"
}
--END_OF_PART
Content-Length: 353
Content-Type: application/http
content-id: 2
content-transfer-encoding: binary
POST https://www.googleapis.com/drive/v3/files/fileId/permissions?fields=id&sendNotificationEmail=false
Authorization: Bearer authorization_token
Content-Length: 58
Content-Type: application/json; charset=UTF-8
{
"domain":"appsrocks.com",
"role":"reader",
"type":"domain"
}
--END_OF_PART--
Response:
HTTP/1.1 200 OK
Alt-Svc: quic=":443"; p="1"; ma=604800
Server: GSE
Alternate-Protocol: 443:quic,p=1
X-Frame-Options: SAMEORIGIN
Content-Encoding: gzip
X-XSS-Protection: 1; mode=block
Content-Type: multipart/mixed; boundary=batch_6VIxXCQbJoQ_AATxy_GgFUk
Transfer-Encoding: chunked
X-Content-Type-Options: nosniff
Date: Fri, 13 Nov 2015 19:28:59 GMT
Cache-Control: private, max-age=0
Vary: X-Origin
Vary: Origin
Expires: Fri, 13 Nov 2015 19:28:59 GMT
--batch_6VIxXCQbJoQ_AATxy_GgFUk
Content-Type: application/http
Content-ID: response-1
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Date: Fri, 13 Nov 2015 19:28:59 GMT
Expires: Fri, 13 Nov 2015 19:28:59 GMT
Cache-Control: private, max-age=0
Content-Length: 35
{
"id": "12218244892818058021i"
}
--batch_6VIxXCQbJoQ_AATxy_GgFUk
Content-Type: application/http
Content-ID: response-2
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Date: Fri, 13 Nov 2015 19:28:59 GMT
Expires: Fri, 13 Nov 2015 19:28:59 GMT
Cache-Control: private, max-age=0
Content-Length: 35
{
"id": "04109509152946699072k"
}
--batch_6VIxXCQbJoQ_AATxy_GgFUk--
From my point of view I think Facebook has the best implementation.
A single HTTP request is made with a batch parameter and one for the access token.
In batch, a JSON array is sent which contains a collection of "requests".
Each request has a method property (get/post/put/delete/etc.) and a relative_url property (the URI of the endpoint); additionally, the post and put methods allow a "body" property where the fields to be updated are sent.
More info at: Facebook batch API
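A hypothetical sketch of such a call (the endpoint, parameter names and token handling should be checked against Facebook's documentation) could look like:

// Hypothetical sketch of a batch call in the style described above; verify
// the endpoint, parameter names and token handling against Facebook's docs.
var requests = [
    { method: "GET",  relative_url: "me" },
    { method: "POST", relative_url: "me/feed", body: "message=Hello" }
];
$.ajax({
    url: "https://graph.facebook.com/",
    type: "POST",
    data: {
        access_token: "YOUR_ACCESS_TOKEN",
        batch: JSON.stringify(requests)
    }
});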
In an operation like the one in your example, I would be tempted to write a range parser.
It's not a lot of bother to make a parser that can read "messageIds=1-3,7-9,11,12-15". It would certainly increase efficiency for blanket operations covering all messages and is more scalable.
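A minimal sketch of such a parser (expanding "1-3,7-9,11,12-15" into a flat list of ids) might look like this:

// A minimal sketch: expand "1-3,7-9,11,12-15" into [1,2,3,7,8,9,11,12,13,14,15]
function parseRanges(spec) {
    var ids = [];
    spec.split(",").forEach(function (part) {
        var bounds = part.split("-");
        var start = parseInt(bounds[0], 10);
        var end = bounds.length > 1 ? parseInt(bounds[1], 10) : start;
        for (var i = start; i <= end; i++) {
            ids.push(i);
        }
    });
    return ids;
}

parseRanges("1-3,7-9,11,12-15");
// => [1, 2, 3, 7, 8, 9, 11, 12, 13, 14, 15]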
Great post. I've been searching for a solution for a few days. I came up with a solution of passing a query string with a bunch of IDs separated by commas, like:
DELETE /my/uri/to/delete?id=1,2,3,4,5
...and then passing that to a WHERE IN clause in my SQL. It works great, but I wonder what others think of this approach.
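If you go this route, the main thing to watch is not splicing the raw string straight into the SQL. A rough sketch (db.query is a hypothetical helper with "?" placeholder support; adjust for whatever driver you actually use) could look like:

// Hypothetical server-side sketch: db.query is an assumed helper that supports
// "?" placeholders; the table name is illustrative only.
function deleteByIds(idParam, callback) {
    // "1,2,3,4,5" -> [1, 2, 3, 4, 5], dropping anything that isn't a number
    var ids = idParam.split(",")
        .map(function (s) { return parseInt(s, 10); })
        .filter(function (n) { return !isNaN(n); });
    if (ids.length === 0) { return callback(null, 0); }

    var placeholders = ids.map(function () { return "?"; }).join(",");
    db.query("DELETE FROM items WHERE id IN (" + placeholders + ")", ids, callback);
}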