number-of-matches-within-limits requesting CardDAV server - iCloud

I recently tried to read a lot of vCards from the iCloud CardDAV server. However, I realized there is a limit: the response is cut off at the 5000th vCard I request.
Here is my REPORT request:
<?xml version="1.0" encoding="utf-8" ?>
<C:addressbook-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:carddav">
<C:filter>
<C:prop-filter name="X-ADDRESSBOOKSERVER-KIND" test="anyof">
<C:is-not-defined />
<C:text-match collation="i;unicode-casemap" match-type="equals" negate-condition="yes">group</C:text-match>
</C:prop-filter>
</C:filter>
</C:addressbook-query>
And here is the end of the server answer:
<response>
  <href>/872816606/carddavhome/card/</href>
  <status>HTTP/1.1 507 OK</status>
  <error><number-of-matches-within-limits/></error>
</response>
Is there a way to query the next page?
Thanks in advance.

CardDAV by itself does not offer a paging mechanism. WebDAV Sync (https://www.rfc-editor.org/rfc/rfc6578), on the other hand, does provide paging, although not all server implementations support it.
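For illustration, an initial sync-collection REPORT that asks the server to page its results might look like the sketch below; note that support for the DAV:limit element (RFC 5323) is itself optional, so iCloud may or may not honor it:
<?xml version="1.0" encoding="utf-8" ?>
<D:sync-collection xmlns:D="DAV:">
  <!-- empty token on the first sync; send the returned token on later syncs -->
  <D:sync-token/>
  <D:sync-level>1</D:sync-level>
  <!-- optional cap on the number of results per response -->
  <D:limit>
    <D:nresults>500</D:nresults>
  </D:limit>
  <D:prop>
    <D:getetag/>
  </D:prop>
</D:sync-collection>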
Now, if I understand your query, you are pretty much asking for all contacts in the collection except for groups. Unless you have a very large number of groups, you probably want to filter them on the client side, in which case you can use a regular PROPFIND (or a WebDAV Sync REPORT), followed by a series of CardDAV multiget REPORTs, which is what most clients do.
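A multiget for a batch of the hrefs collected that way could look like the following (the card filenames are placeholders; use the hrefs your PROPFIND returned):
<?xml version="1.0" encoding="utf-8" ?>
<C:addressbook-multiget xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:carddav">
  <D:prop>
    <D:getetag/>
    <C:address-data/>
  </D:prop>
  <!-- placeholder hrefs -->
  <D:href>/872816606/carddavhome/card/abc.vcf</D:href>
  <D:href>/872816606/carddavhome/card/def.vcf</D:href>
</C:addressbook-multiget>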


How to force all users' browsers to refresh for a software update

I have a number of web applications that run for a number of businesses, day in and day out.
The applications are PHP/MySQL/JS, running on a remote Apache server.
For many years, I have performed updates at late night when the software is not in use.
I would like to be able to perform updates to the software during working hours, if possible.
I have many times asked my clients to make sure they shut the software down at night and close their browsers - but I can never guarantee that they have done so.
I have a refresh timer in the JS that triggers a browser refresh at 11:59, if the browser is still open.
But I would like to be able to trigger this refresh in any open browser - whenever I want.
I have mulled over a few ways to do this - including cron and database values that can be read and reset - but:
I wonder if anyone has had success with achieving this?
You want to refresh all open browser tabs that are pointing at your xAMP-ish applications. A few questions:
Does the refresh need to be immediate, or can it be deferred? That is, do everyone's tabs need to be refreshed at the same time, regardless of user interaction, or is it acceptable to wait until the next request from each client, whenever that may be?
Can you schedule the refresh ahead of time (say, with at least 1 session-timeout interval lead-up time), or do you need a method that triggers refreshes immediately?
If you require immediate refreshes, with no ahead-of-time scheduling, you are out of luck. The only way to do this is to keep an open channel for asynchronous updates from the server to the clients, which is hard to do with plain Apache/PHP (see Comet, WebSockets).
If you can make do with deferred refreshes (waiting until a user submits a request), you have several alternatives. For example, you can
expire all sessions (by calling a script that removes all the corresponding server-side session files, found in /var/lib/php/sessions/ on Linux). Note that your users will not appreciate losing, say, their shopping-cart contents.
use JavaScript to check a client-side version value (loaded at login-time, and kept in localStorage or similar) against incoming replies from the server (which would load it from a configuration file or a DB request). If the server-side value has changed, save whatever can be saved to localStorage (to avoid the previous scenario), inform the user, and refresh the page.
Alternatively, if you can schedule the refreshes with enough forewarning, you can include instructions in server replies that will invoke the refresh mechanism when needed. For example, such replies could change your current "reset at 11:59:59" code to read "reset at $requested_reset_time".
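One hypothetical way to carry that instruction is a value injected into each server reply that the page's existing JS timer reads instead of the hard-coded time (the tag name and date format here are made up):
<!-- injected by the server into every page it serves; the client-side
     timer polls this value and schedules the refresh accordingly -->
<meta name="requested-reset-time" content="2024-06-01T14:30:00Z" />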
As I understand the problem, you would want control over when the user sees 'fresh' content and when the cached stuff is okay. If this is right,
Add the following in your head content -
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate" />
<meta http-equiv="Pragma" content="no-cache" />
<meta http-equiv="Expires" content="0" />
Upon receiving these directives, the user's browser will fetch fresh content instead of reusing its cache. You can flip the lines above on and off to suit your needs. This might not be the most sophisticated way of achieving the desired functionality, but it is worth trying.
There are a lot of things to consider before doing something like this. For example, if someone is actively working on a page, maybe filling out a form, and you were able to refresh their window, that could create a negative user experience. I believe some of the other answers here addressed other concerns as well.
That said, I know from working with the LaunchDarkly feature-flag service that it can be done. I don't understand all the inner workings, unfortunately, but my understanding is that the service uses observables to watch for updates. Observables are similar to promises, except they continuously watch for new changes to their target. You could then force a page reload (or perhaps show the user an alert prompting one) when the target updates.

Unencrypted Communications are disallowed

I am trying to leverage the IDOL Web API (if that is its name) so that I can programmatically validate that document creation/modification actions trigger an IDOL crawl of said document. We had a situation where a few documents were seemingly crawled by IDOL, but never made it to one of the 13 content engines. What I want to do is execute a script hourly to quickly validate that all documents created/modified in the preceding hour are in the IDOL index. This should be "easy" based upon some of the examples I have found.
Executing http://[server]:9000/ACTION=licenseInfo gives me this response in the browser:
<?xml version="1.0" encoding="UTF-8" ?>
- <autnresponse xmlns:autn="http://schemas.autonomy.com/aci/">
<action>LICENSEINFO</action>
<response>ERROR</response>
- <responsedata>
- <error>
<errorid>IDOLPROXYLICENSEINFO-2147441838</errorid>
<rawerrorid>0x8000A352</rawerrorid>
<errorstring>Unencrypted communications are disallowed</errorstring>
<errorcode>ERRORENCRYPTIONFAILED</errorcode>
<errortime>08 Aug 12 10:23:02</errortime>
</error>
</responsedata>
</autnresponse>
Executing http://[server]:9000/ACTION=query&text=toys gives me this:
<?xml version="1.0" encoding="UTF-8" ?>
- <autnresponse xmlns:autn="http://schemas.autonomy.com/aci/">
<action>QUERY</action>
<response>ERROR</response>
- <responsedata>
- <error>
<errorid>IDOLPROXYQUERY-2147441838</errorid>
<rawerrorid>0x8000A352</rawerrorid>
<errorstring>Unencrypted communications are disallowed</errorstring>
<errorcode>ERRORENCRYPTIONFAILED</errorcode>
<errortime>08 Aug 12 10:26:40</errortime>
</error>
</responsedata>
</autnresponse>
Is there something I am missing in my IIS setup?
I asked this question in another forum that seems to be more active for users of the Autonomy/Interwoven suite of products. Basically, the version of the IDOL indexer that comes built-in with the WorkSite Indexer does not allow Web API calls.

What is the right way to confirm a submission to a web API?

I have a mobile device that is constantly recording information. The information is stored on the device's local database. Every few minutes, the device will upload the data to a server through a REST API - sometimes the uploaded data corresponds to dozens of records from the same table. Right now, the server responds with
{status: "SAVED"}
if the data is saved to the server.
In the interest of being 100% sure that the data has actually been uploaded (so the device won't attempt to upload it again), is that simple response enough? Or should I hash the incoming data and respond with the hash, or something similar? Perhaps I should send back the local row IDs of the device's table rows?
I think it's fine to have a very simple "SUCCESS" response if the entire request did indeed successfully save.
However, I think that when there is a problem, your response needs to include the IDs (or some other unique identifier) of the records that failed to save so that they can be queued to be resent.
If the same records fail multiple times, you might need to log the error or display it so that further action can be taken.
A successful response could be something as simple as:
<response>
  <status>1</status>
</response>
An error response could be something like:
<response>
  <status>0</status>
  <errorRecords>
    <id>441</id>
    <id>8462</id>
    <id>12</id>
  </errorRecords>
</response>
You could get fancy and have different status codes that mean different, more specific messages.
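For instance, a partial-success reply could use its own code (the numbers and layout below are only an illustration, not a standard):
<response>
  <!-- hypothetical code: 2 = partial save; only the listed records failed -->
  <status>2</status>
  <errorRecords>
    <id>441</id>
  </errorRecords>
</response>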

Apache Camel to aggregate multiple REST service responses

I'm new to Camel and am wondering how I can implement the use case below.
We have a REST web service, and let's say it has two service operations: callA and callB.
Now we have an ESB layer in front that intercepts the client requests before they hit the actual web service URLs.
Now I'm trying to do something like this -
Expose a URL in the ESB that the client will actually call. In the ESB we are using Camel's Jetty component, which just proxies this service call. So let's say this URL is /my-service/scan/.
Now, on receiving this request at the ESB, I want to call these two REST endpoints (callA and callB) -> get their responses - resA and resB -> aggregate them into a single response object resScan -> return it to the client.
All I have right now is -
<route id="MyServiceScanRoute">
<from uri="jetty:http://{host}.{port}./my-service/scan/?matchOnUriPrefix=true&bridgeEndpoint=true"/>
<!-- Set service specific headers, monitoring etc. -->
<!-- Call performScan -->
<to uri="direct:performScan"/>
</route>
<route id="SubRoute_performScan">
<from uri="direct:performScan"/>
<!-- HOW DO I??
Make callA, callB service calls.
Get their responses resA, resB.
Aggregate these responses to resScan
-->
</route>
I think that you are unnecessarily complicating the solution a little bit. :) In my humble opinion the best way to call two independent remote web services and concatenate the results is to:
call services in parallel using multicast
aggregate the results using the GroupedExchangeAggregationStrategy
The routing for the solution above may look like:
from("direct:serviceFacade")
.multicast(new GroupedExchangeAggregationStrategy()).parallelProcessing()
.enrich("http://google.com?q=Foo").enrich("http://google.com?q=Bar")
.end();
The exchange coming out of direct:serviceFacade will contain the property Exchange.GROUPED_EXCHANGE set to a list of the results of the calls to your services (Google Search in my example).
And this is how you could wire direct:serviceFacade to the Jetty endpoint:
from("jetty:http://0.0.0.0:8080/myapp/myComplexService").enrich("direct:serviceFacade").setBody(property(Exchange.GROUPED_EXCHANGE));
Now all HTTP requests to the service URL you exposed on the ESB using the Jetty component will generate responses concatenated from the two calls to the subservices.
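Since your existing routes use the XML DSL, a rough equivalent of the above might look like the following sketch (the callA/callB URLs are placeholders, and the strategy is registered as a bean in the surrounding Spring context):
<bean id="groupedStrategy" class="org.apache.camel.processor.aggregate.GroupedExchangeAggregationStrategy"/>

<route id="SubRoute_performScan">
  <from uri="direct:performScan"/>
  <multicast strategyRef="groupedStrategy" parallelProcessing="true">
    <!-- placeholder endpoints for callA and callB -->
    <to uri="http://{host}:{port}/my-service/callA?bridgeEndpoint=true"/>
    <to uri="http://{host}:{port}/my-service/callB?bridgeEndpoint=true"/>
  </multicast>
  <!-- as in the Java DSL above: expose the grouped exchanges (resA and resB)
       as the body, then map them to your single resScan object -->
  <setBody>
    <simple>${property.CamelGroupedExchange}</simple>
  </setBody>
</route>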
Further considerations regarding the dynamic part of messages and endpoints
In many cases using a static URL in endpoints is insufficient to achieve what you need. You may also need to prepare the payload before passing it to each web service.
Generally speaking, the type of routing used to achieve dynamic endpoints or payload parameters is highly dependent on the component you use to consume web services (HTTP, CXFRS, Restlet, RSS, etc.). Each component varies in the degree to which, and the way in which, it can be configured dynamically.
If your endpoints/payloads should be affected dynamically you could also consider the following options:
Preprocess a copy of the exchange passed to each endpoint using the onPrepareRef option of the Multicast endpoint. You can use it to refer to a custom processor that will modify the payload before passing it to the Multicast's endpoints. This can be a good way to combine onPrepareRef with the Exchange.HTTP_URI header of the HTTP component.
Use Recipient List (which also offers parallelProcessing as the Multicast does) to dynamically create the REST endpoints URLs.
Use Splitter pattern (with parallelProcessing enabled) to split the request into smaller messages dedicated to each service. Once again this option could work pretty well with Exchange.HTTP_URI header of HTTP component. This will work only if both sub-services can be defined using the same endpoint type.
As you can see, Camel is pretty flexible and offers many ways to achieve your goal. Consider the context of your problem and choose the solution that fits best.
If you show me more concrete examples of the REST URLs you want to call on each request to the aggregation service, I could advise you on which solution I would choose and how to implement it. It is particularly important to know which part of the request is dynamic. I also need to know which service consumer you want to use (it will depend on the type of data you will receive from the services).
This looks like a good example of where the Content Enricher pattern should be used (described in the Camel documentation).
<from uri="direct:performScan"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateRequestAndA"/>
<enrich uri="ServiceA_Uri_Here" strategyRef="aggregateAandB"/>
</route>
The aggregation strategies have to be written in Java (or perhaps a scripting language - Scala or Groovy? - but I have not tried that).
The aggregation strategy just needs to be a bean that implements org.apache.camel.processor.aggregate.AggregationStrategy which in turn requires you to implement one method:
Exchange aggregate(Exchange oldExchange, Exchange newExchange);
So now it's up to you to merge the request with the response from the enriching service call. You have to do it twice, since you have both callA and callB. There are two predefined aggregation strategies that you may or may not find useful: UseLatestAggregationStrategy and UseOriginalAggregationStrategy. The names are quite self-explanatory.
Good luck

How to re-request room roster and history from a MUC in ejabberd

When a user joins an ejabberd MUC, the server will send a full room roster and chat history to the user.
In my web-based client I need to persist the room over page reloads. My problem is that I lose all the initial information when the page is unloaded.
At the moment I'm working around this by serialising the roster and room history to JSON and storing it in a cookie. However, this is a really bad idea (tm), as I can very quickly exceed the general 4k cookie limit for rooms with a lot of users.
So the question: How can I re-request the information the server sends a user on join, without actually rejoining a MUC?
One approach for rosters would be to send a query iq with a namespace of "http://jabber.org/protocol/disco#items", but this is incomplete, as it doesn't provide presence information or any extended info (such as real JIDs for non-anonymous rooms).
On page unload you need to send "presence unavailable".
On page load (rejoining the room), send "presence available" plus a "history" request. For example:
<history maxstanzas="20" />
See the XEP-0045 schema for reference.
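Put together, the rejoin presence carries the history element inside the MUC extension, per XEP-0045 (the room JID and nickname below are placeholders):
<presence to="room@conference.example.com/mynick">
  <x xmlns="http://jabber.org/protocol/muc">
    <!-- ask the service to replay at most 20 stanzas of history -->
    <history maxstanzas="20"/>
  </x>
</presence>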
Hmm. I don't have a solution for the roster, but for the history, have you tried this?
<iq to="room#conference.xmpp.org" type="get">
<history xmlns="http://www.jabber.com/protocol/muc#history" start="1970-01-01T00:00:00Z" direction="forward" count="100" />
</iq>
Try leaving the MUC room on page unload and re-sending presence to the MUC on page reload.