I created a pass and can register a device with my server. I also updated this pass by changing some of its contents and inserting a new row into the pass table. But why does the console show last updated (null)? This is what I get from the console:
Apr 6 10:30:29 CamMobs-iPod4 passd[21] <Warning>: Get serial #s task (for device b6511c0dd47d04da449ce427b27fea74, pass type pass.cam-mob.passbookpasstest, last updated (null); with web service url http://192.168.1.202:8888/passesWebserver/) got response with code 200
Whenever a .pkpass bundle is accepted or replaced in a device's Passbook library, Passbook will tag the pass with a last updated attribute.
This attribute is typically set from the Last-Modified header that a web server sends the first time the pass is downloaded, and that your web service sends with every response to a "Get the Latest Version of a Pass" request.
Passbook also polls your web service for serialNumbers via the "Getting the Serial Numbers for Passes Associated with a Device" method, using the deviceLibraryIdentifier and passTypeIdentifier as criteria.
The "Getting the Serial Numbers for Passes Associated with a Device" response should contain a lastUpdated tag indicating when the results of this query last changed (i.e. the last time that a pass of this passTypeIdentifier registered to this deviceLibraryIdentifier was updated).
However, the very first time Passbook sends a "Getting the Serial Numbers for Passes Associated with a Device" request, it will not yet have received a lastUpdated tag, which is why your console log shows null. Sending a lastUpdated tag is also optional, so if it is not present, or not sent correctly, you will always see last updated (null) for this request.
You are free to use whatever you like as a lastUpdated tag. The simplest solution to implement is a unix timestamp, as there is no need to mess around with date formats.
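For illustration, here is a minimal sketch of that response using Express (the route shape follows Apple's web service spec; findUpdatedSerials is a hypothetical helper that queries your pass table):

var express = require('express');
var app = express();

// Hypothetical handler for the "Get Serial Numbers" request:
// GET /v1/devices/:deviceLibraryIdentifier/registrations/:passTypeIdentifier
app.get('/v1/devices/:deviceLibraryIdentifier/registrations/:passTypeIdentifier',
  function (req, res) {
    // findUpdatedSerials is an assumed helper returning serial numbers
    // changed since req.query.passesUpdatedSince (absent on the first poll).
    findUpdatedSerials(req.params, req.query.passesUpdatedSince, function (err, serials) {
      if (err) return res.sendStatus(500);
      if (serials.length === 0) return res.sendStatus(204); // nothing changed
      res.json({
        serialNumbers: serials,
        // A unix timestamp is the simplest tag; Passbook sends it back
        // verbatim as passesUpdatedSince on its next poll.
        lastUpdated: String(Math.floor(Date.now() / 1000))
      });
    });
  });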
So I am looking into the Postmates API and I have been able to create a delivery. This was great. I also set up a webhook URL with ngrok to test the responses from Postmates, but I am totally stumped as to how to determine when the pickup was actually completed and the dropoff/delivery was actually completed.
I saved all of the responses in a database, and each time I did the test delivery I received exactly 70 calls from the webhook endpoint. Each time, 47 of them were of 'kind': 'event.delivery_status'. Here are the stats:
THIS IS ALL IN TEST MODE WITH THE SANDBOX...
11 of those are 'status':'pickup_complete'
14 of those are 'status':'pickup'
11 of those are 'status':'dropoff'
11 of those are 'status':'delivered'
All of the webhook responses for status=delivered have a 'data.courier_imminent': false value.
I went to the webpage for the 'data.tracking_url', and when the page showed that the delivery was complete, I immediately checked the database to see how many records I had saved; I was only at 32 total records. This means the webhook continued sending me updates after the delivery was supposedly complete.
Lastly, these statuses do not arrive in order; they are totally random. In fact, the 6th-to-last record received was a pickup_complete status.
The real question:
How will I know when the pickup is actually complete, when the delivery is actually complete, etc.?
You'll receive a webhook of type event.delivery_status. One of the fields within the body of the payload will be {status: "delivered"}. This has been accurate so far. Postmates doesn't return a delivered_at timestamp, but you could create your own timestamp and store it along with the delivery for reporting.
As for the number of webhooks, Postmates has a delivery robot (called robo) that moves as if it were a real postmate. You'll receive a lot of webhooks of type event.courier_update with the updated location.
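For example, a rough Express sketch that ignores the noise and only reacts to the final delivered status (markDelivered is a hypothetical persistence helper; the field names follow the payloads quoted above, and delivery_id is assumed to identify the delivery):

var express = require('express');
var app = express();
app.use(express.json());

// Hypothetical endpoint registered as the Postmates webhook URL.
app.post('/webhooks/postmates', function (req, res) {
  var event = req.body;
  // Skip courier location noise (event.courier_update) and the
  // intermediate statuses; only react to the final delivered status.
  if (event.kind === 'event.delivery_status' && event.data.status === 'delivered') {
    // Postmates doesn't send a delivered_at timestamp, so record our own.
    markDelivered(event.delivery_id, new Date());
  }
  res.sendStatus(200); // always acknowledge so the webhook isn't retried
});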
Does anybody know how I can handle this?
So, I execute the following code in my controller:
this.get('newEvent').save();
It gets my data and sends it to the server. Here's some data:
Request URL:http://localhost:1337/api/v1/events/53a9cfee701b870000dc1d01
Data to send:
event: {
  date: "Mon, 18 Aug 2014 07:00:00 GMT",
  host: "Host",
  repeatable: true,
  title: "Event name"
}
But on success I need to know the model id on my server, which is normally included in the event object I send. Do you know any solutions? By the way, the DELETE request doesn't exclude the id from the event object.
You can easily override the saving behavior to include the id by overwriting the updateRecord hook in the RESTAdapter (https://github.com/emberjs/data/blob/master/packages/ember-data/lib/adapters/rest_adapter.js#L457), passing includeId: true the same way createRecord does (https://github.com/emberjs/data/blob/master/packages/ember-data/lib/adapters/rest_adapter.js#L436).
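Something along these lines (adapted from the stock updateRecord in the linked rest_adapter.js of that Ember Data version; treat it as a sketch):

App.ApplicationAdapter = DS.RESTAdapter.extend({
  updateRecord: function (store, type, record) {
    var data = {};
    var serializer = store.serializerFor(type.typeKey);

    // The one change from the stock hook: pass includeId so the
    // serialized payload carries the id, just as createRecord does.
    serializer.serializeIntoHash(data, type, record, { includeId: true });

    var id = record.get('id');
    return this.ajax(this.buildURL(type.typeKey, id), 'PUT', { data: data });
  }
});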
There are a few ways of handling this, and it comes down to personal preference, but I would recommend altering your backend code to pull the ID from the URL. Normally when you define the PUT route, you do it like /events/:event_id. If you do it this way on the backend, there's no need for the id to be in the event object that's sent over.
api/v1/events/53a9cfee701b870000dc1d01
You can capture this on the server by looking at the URL parameter that's sent over. If you're using Express on the backend, you can get the ID straight from the URL with little effort. I can't comment yet, but let me know your backend structure and I can point you in the right direction. Possibly include the PUT request you wrote on the back end as a point of reference.
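For instance, if the backend happens to be Express, the id is already in req.params and never needs to travel in the payload (updateEvent here is a hypothetical data-access helper):

// PUT /api/v1/events/:event_id
app.put('/api/v1/events/:event_id', function (req, res) {
  var id = req.params.event_id; // id comes from the URL, not the body
  var attrs = req.body.event;   // { date, host, repeatable, title }

  updateEvent(id, attrs, function (err, event) {
    if (err) return res.sendStatus(500);
    res.json({ event: event }); // echo the record back, id included
  });
});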
The other thing that you can do (not recommended, but it will work) is to store a temporary id before doing the save:
var event = this.get('newEvent');
event.set('temp_id', event.get('id'));
event.save();
Then you can read temp_id from the payload on the server without checking the URL parameter.
I'm working on a service which scrapes specific links from blogs. The service makes calls to different sites, pulling in and storing the data.
I'm having trouble specifying the URL for updating the data on the server, where I now use the verb update to pull in the latest links.
I currently use the following endpoints:
GET /user/{ID}/links - gets all previously scraped links (few milliseconds)
GET /user/{ID}/links/update - starts scraping and returns the scraped data (few seconds)
What would be a good option for the second URL? Some examples I came up with myself:
GET /user/{ID}/links?collection=(all|cached|latest)
GET /user/{ID}/links?update=1
GET /user/{ID}/links/latest
GET /user/{ID}/links/new
Using GET to start a process isn't very RESTful. You aren't really GETting information; you're asking the server to process information. You probably want to POST against /user/{ID}/links (a quick Google for PUT vs POST will give you endless reading if you're curious about the finer points there). You'd then have two options:
POST with background process: If using a background process (or queue) you can return a 202 Accepted, indicating that the service has accepted the request and is about to do something. A 202 generally indicates that the client shouldn't wait around, which makes sense when performing time-dependent actions like scraping. The client can then issue GET requests against /user/{ID}/links to retrieve updates.
Creative use of Last-Modified headers can tell the client when new updates are available. If you want to be super fancy, you can implement HEAD /user/{ID}/links that will return a Last-Modified header without a response body (saving both bandwidth and processing).
POST with direct processing: If you're doing the processing during the request (not a great plan in the grand scheme of things), you can return a 200 OK with a response body containing the updated links.
Subsequent GETs would perform as normal.
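A hedged sketch of the background-process variant, assuming an Express backend with hypothetical enqueueScrape/getLinks helpers:

// Kick off scraping without making the client wait for it.
app.post('/user/:id/links', function (req, res) {
  enqueueScrape(req.params.id); // hand off to a worker or queue
  res.status(202)               // 202 Accepted: "working on it"
     .location('/user/' + req.params.id + '/links')
     .end();
});

// Normal reads, with Last-Modified so clients can poll cheaply.
app.get('/user/:id/links', function (req, res) {
  getLinks(req.params.id, function (err, links, lastScrapedAt) {
    if (err) return res.sendStatus(500);
    res.set('Last-Modified', lastScrapedAt.toUTCString());
    res.json({ links: links });
  });
});

As a bonus, Express answers HEAD /user/{ID}/links automatically from the GET route (headers only, no body), which gives you the header-only polling trick for free.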
Update
Demo: http://jsbin.com/ogorab/311/edit
I'm trying to build a simple chat room that updates in real time using Faye/WebSockets. Messages are posted using regular REST, but there is also a subscription via Faye to /messages/created, which uses store.pushPayload to push new messages.
Now the following scenario happens and I can see where it goes wrong but I have no clue how to solve it:
User submits chat message
ChatController handles the submit, calls createRecord with the chat message, and subsequently #save
The chat message is instantly shown in the chat (triggered by createRecord). Note that no id has been assigned yet.
A REST request is sent to the server
The server first publishes the message to Faye
The server responds to the REST request
Before the ajax call is resolved, a message has arrived at /messages/created
The message is inserted into the view again (it should be merged with the original message of course, but that one still hasn't been assigned an id)
The ajax call is resolved, and the id of the original message is assigned.
This results in duplicate messages, in the following order:
[message via createRecord, will resolve via ajax response]
[message inserted via pushPayload/Faye]
I hope you can follow so far. A solution would be to have Faye wait for the save call to resolve before pushing the payload. Unfortunately I don't have a reference to the record that is being saved (the save happens in a controller; the Faye subscription is set up in the ApplicationRouter).
Also, I would like this to work in a generic way :)
Finally found a solution for this, but other suggestions are still welcome.
It turns out that Store#didSaveRecord assigns the id after the record is saved. By overriding this method (and then calling super, in that order), we can first check whether a record with that id already exists:
App.Store = DS.Store.extend
  didSaveRecord: (record, data) ->
    # This will remove any existing record with the same id
    @getById(record.constructor, data.id)?.unloadRecord()
    @_super(record, data)
I have a mobile device that is constantly recording information. The information is stored in the device's local database. Every few minutes, the device uploads the data to a server through a REST API; sometimes the uploaded data corresponds to dozens of records from the same table. Right now, the server responds with
{status: "SAVED"}
if the data is saved to the server.
In the interest of being 100% sure that the data has actually been uploaded (so the device won't attempt to upload it again), is that simple response enough? Or should I be hashing the incoming data and responding with the hash, or something similar? Perhaps I should send back the local row ids of the uploaded rows?
I think it's fine to have a very simple "SUCCESS" response if the entire request did indeed successfully save.
However, I think that when there is a problem, your response needs to include the IDs (or some other unique identifier) of the records that failed to save so that they can be queued to be resent.
If the same records fail multiple times, you might need to log the error or display it so that further action can be taken.
A successful response could be something as simple as:
<response>
  <status>1</status>
</response>
An error response could be something like:
<response>
  <status>0</status>
  <errorRecords>
    <id>441</id>
    <id>8462</id>
    <id>12</id>
  </errorRecords>
</response>
You could get fancy and have different status codes that mean different, more specific messages.
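As a sketch of the server side (Express assumed, saveRecord hypothetical; shown returning JSON, but the same shape maps straight onto the XML above):

app.post('/api/records', function (req, res) {
  var records = req.body.records || [];
  var failed = [];
  var pending = records.length;

  if (pending === 0) return res.json({ status: 1 }); // nothing to save

  records.forEach(function (record) {
    // saveRecord is an assumed persistence helper.
    saveRecord(record, function (err) {
      // Remember the device's local row id for anything that failed.
      if (err) failed.push(record.id);
      if (--pending === 0) {
        res.json(failed.length === 0
          ? { status: 1 }                          // everything saved
          : { status: 0, errorRecords: failed });  // resend these
      }
    });
  });
});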