I am defining a simple REST API to run a query and get the results with pagination. I would like to make both client and server stateless if possible.
The client sends the query, offset, and length. The server returns the results. For example, if there are 1000 results and the client sends offset = 10 and length = 20, the server returns the 20 results from #10 through #29 if the total number of results is >= 30, or all results from #10 onward if the total is < 30.
The client also needs to know the total number of results. Since I would like to keep both client and server stateless, the server will always return the total with the results.
So the protocol looks like the following:
Client: query, offset, length ----------->
<----------- Server: total, results
The offset and length can be defined as optional. If offset is missing, the server assumes it is 0. If length is missing, the server returns all the results.
Does this make sense? How would you suggest defining such a protocol?
There is no single standard for REST API design.
Since this is a query rather than retrieving a resource by its id, the search criteria go into query string parameters, followed by the optional offset and length parameters.
GET /resource?criteria=value&offset=10&length=3
Assuming you'd like to use JSON as the response representation, the result can look like this:
{
    "total": 100,
    "results": [
        {
            "index": 10,
            "id": 123,
            "name": "Alice"
        },
        {
            "index": 11,
            "id": 423,
            "name": "Bob"
        },
        {
            "index": 12,
            "id": 986,
            "name": "David"
        }
    ]
}
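For what it's worth, the server-side slicing behind such a response is only a few lines. Here is a minimal Python sketch of the offset/length semantics described in the question (function and variable names are my own, not part of any standard):

```python
def paginate(results, offset=0, length=None):
    """Return (total, page) for the stateless protocol sketched above.

    offset defaults to 0; a length of None means "all remaining results".
    """
    total = len(results)
    if length is None:
        page = results[offset:]
    else:
        page = results[offset:offset + length]
    return total, page

# 1000 results, offset=10, length=20 -> items #10 through #29
total, page = paginate(list(range(1000)), offset=10, length=20)
```

The total is computed fresh on every call, so neither side keeps any session state between requests.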
My way to implement pagination uses implicit information.
The client can only get "pages". No offset or limit is given by the client.
GET /users/messages/1
The server gives back pages with a predefined number of elements, e.g., 10. The offset is calculated from the page number, so the client doesn't have to worry about the total number of elements; that information can be provided in a header. To retrieve all elements (an exceptional case), the client has to write a loop and increment the page number.
Advantages: cleaner URI; true pagination; offset, limit, and length are clearly defined.
Disadvantages: getting all elements is hard; flexibility is lost.
Don't overload URIs with meta information; URIs are for resource identification.
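A minimal Python sketch of this page-based scheme, assuming a server-defined page size of 10 (all names here are illustrative):

```python
PAGE_SIZE = 10  # predefined by the server, not chosen by the client

def page_to_slice(page_number):
    """Map a 1-based page number to the (offset, limit) the server uses internally."""
    offset = (page_number - 1) * PAGE_SIZE
    return offset, PAGE_SIZE

def fetch_all(fetch_page):
    """Client-side loop for the exceptional "get everything" case:
    increment the page number until a short (or empty) page comes back."""
    items = []
    page = 1
    while True:
        batch = fetch_page(page)
        items.extend(batch)
        if len(batch) < PAGE_SIZE:
            return items
        page += 1
```

The offset/limit pair never appears in the URI; it exists only on the server side, derived from the page number.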
I am a coin wallet developer, and I am investigating Cosmos' transfer this time.
Cosmos has MsgMultiSend as well as MsgSend.
I know that MsgMultiSend sends several transfers using inputs and outputs in the form of arrays.
I wonder whether the order of inputs and outputs is matched one-to-one and guaranteed.
(i.e., whether the recipient matching the first sender in inputs is always guaranteed to be the first entry in outputs.)
(i.e.
transfer 1 : inputs[0] -> outputs[0]
transfer 2 : inputs[1] -> outputs[1]
...)
In Cosmos SDK 0.45.9 with CosmJS 0.28.11, MsgMultiSend's inputs must all be from the same address. If you have multiple input addresses, you must provide multiple signatures to verify them, and when I try to do this, the SDK throws BroadcastTxError: Broadcasting transaction failed with code 4 (codespace: sdk). Log: wrong number of signers; expected 1, got 2: unauthorized at CosmWasmClient.broadcastTx. But if you use the same address, it will succeed. Example on the Aura Network Testnet: A070ED2C0557CFED34F48BF009D2E21235E79E07779A80EF49801F5983035F1B (click JSON to view the raw data).
Also, the sum of the input token amounts must equal the sum of the output token amounts. If they are not equal, this error is thrown: Broadcasting transaction failed with code 4 (codespace: bank). Log: sum inputs != sum outputs.
You can look at the transaction's events data to learn more about this typeUrl.
Example:
1 input sent to 19 outputs
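The bank module's balance invariant (sum of inputs must equal sum of outputs) can be sketched as a client-side pre-check. Note that the dict shape below is illustrative only, not the exact protobuf structure of the message:

```python
def check_multisend(inputs, outputs):
    """Pre-validate a MsgMultiSend-style payload: per-denom input totals must
    equal per-denom output totals, mirroring the "sum inputs != sum outputs" error."""
    def totals(entries):
        sums = {}
        for entry in entries:
            for coin in entry["coins"]:
                sums[coin["denom"]] = sums.get(coin["denom"], 0) + int(coin["amount"])
        return sums
    return totals(inputs) == totals(outputs)

# 1 input fanned out to 2 outputs: 30 = 10 + 20 (addresses are placeholders)
inputs = [{"address": "addr_a", "coins": [{"denom": "uaura", "amount": "30"}]}]
outputs = [
    {"address": "addr_b", "coins": [{"denom": "uaura", "amount": "10"}]},
    {"address": "addr_c", "coins": [{"denom": "uaura", "amount": "20"}]},
]
```

Running this check before broadcasting avoids paying for a transaction that the bank module will reject anyway.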
I'm trying to get the friends of a user and append them to a list given a condition:
for friend in tweepy.Cursor(api.friends).items():
    if friend.screen_name not in visited:
        screen_names.append(friend.screen_name)
        visited.append(friend.screen_name)
However I obtain an error:
raise RateLimitError(error_msg, resp)
tweepy.error.RateLimitError: [{u'message': u'Rate limit exceeded', u'code': 88}]
Could you give me any hint on solving this problem? Thanks a lot
By default, the friends method of the API class returns only 20 users per call, and the Twitter API limits you to 15 calls per window (15 minutes). Thus you can only fetch 20 x 15 = 300 friends within 15 minutes.
Cursor in tweepy is another way of getting results without managing the cursor value on each call to the Twitter API.
You can increase the number of results fetched per call by including the extra parameter count.
tweepy.Cursor(api.friends, count = 200)
The maximum value of count is 200. If you have more than 200 x 15 = 3000 friends, then you need to use the plain api.friends method, maintaining the cursor value yourself and using sleep to space out the calls. See the GET friends/list page for detailed info.
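The arithmetic above can be captured in a couple of helper functions (tweepy-independent; the window length and call limit are the documented Twitter values):

```python
WINDOW_SECONDS = 15 * 60   # Twitter rate-limit window: 15 minutes
CALLS_PER_WINDOW = 15      # documented limit for friends/list

def max_friends_per_window(count_per_call):
    """Upper bound on friends fetchable in one 15-minute window."""
    return count_per_call * CALLS_PER_WINDOW

def pause_between_calls():
    """Seconds to sleep between calls to spread 15 requests evenly over the window."""
    return WINDOW_SECONDS / CALLS_PER_WINDOW
```

With the default count of 20 that gives 300 friends per window; with count=200 it gives 3000, which is why larger friend lists need manual cursoring plus sleep.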
Since tweepy 3.2+ you can instruct the tweepy library to wait for rate limits. This way you don't have to do that in your code.
To use this feature you would initialize your api handle as follows:
self.api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
The documentation for the new variables is below.
wait_on_rate_limit – Whether or not to automatically wait for rate limits to replenish
wait_on_rate_limit_notify – Whether or not to print a notification when Tweepy is waiting for rate limits to replenish
Per Twitter's API documentation, you have reached your query limit. The rate limits apply per 15-minute window, so try again in 30 minutes or use a different IP address to hit the API. If you scroll down Twitter's documentation, you will see your code 88.
I have a server sending a multi-dimensional character array:
char buff1[][3] = { {0xff,0xfd,0x18}, {0xff,0xfd,0x1e}, {0xff,0xfd,21} };
In this case buff1 carries 3 messages (each having 3 characters). There could be multiple instances of buffers on the server side, each with a variable number of messages (note: each message will always have 3 characters), e.g.:
char buff2[][3] = { {0xff,0xfd,0x20}, {0xff,0xfd,0x27} };
How should I store the size of these buffers on the client side while compiling the code?
The server should send information about the length (and any other structure) of the message with the message as part of the message.
An easy way to do that is to send the number of bytes in the message first, then the bytes in the message. Often you also want to send the version of the protocol (so you can detect mismatches) and maybe even a message id header (so you can send more than one kind of message).
If blazing-fast performance isn't the goal (and you are talking over a network interface, which tends to be slower than the CPU, so parsing may be cheap enough that you don't care), using a higher-level protocol or format is sometimes a good idea (JSON, XML, whatever). This also helps with debugging, because instead of debugging your custom protocol, you get to debug the higher-level format.
Alternatively, you can send some sign that the sequence has terminated. If there is a value that is never a valid sequence element (such as 0,0,0), you could send that to say "no more data". Or you could send each element with a header saying if it is the last element, or the header could say that this element doesn't exist and the last element was the previous one.
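Here is a sketch of the length-prefix idea in Python (the framing chosen here, a 4-byte big-endian length followed by the payload, is one common convention, not a fixed standard):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix the payload with its length as a 4-byte big-endian integer."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    """Split a byte stream back into the individual messages it contains."""
    messages, pos = [], 0
    while pos < len(stream):
        (length,) = struct.unpack_from(">I", stream, pos)
        pos += 4
        messages.append(stream[pos:pos + length])
        pos += length
    return messages

# two 3-byte messages, mirroring buff2 above
wire = frame(bytes([0xFF, 0xFD, 0x20])) + frame(bytes([0xFF, 0xFD, 0x27]))
```

The receiver never needs to know the buffer sizes at compile time; it reads the length field first and then exactly that many bytes.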
I just embedded the Mongoose web server into my C++ DLL (just a single header, and recommended in most of the Stack Overflow threads), and I have it up and running properly with the very minimal example code.
However, I am having a rough time finding any sort of tutorials, examples, etc. on configuring the very basic necessities of a web server. I need to figure out the following...
1) How to allow directory browsing
2) Is it possible to restrict download speeds on the files?
3) Is it possible to have a dynamic list of IPs addresses allowed to download files?
4) How to allow the download of specific file extensions (.bz2 in this case) ANSWERED
5) How to bind to a specific IP Address ANSWERED
Most of the information I have found is in regards to using the pre-compiled binary release, so I am a bit stumped right now. Any help would be fantastic!
1) "enable_directory_listing" option
2) Not built into Mongoose (at least not the version I have, which is about 6 months old). [EDIT:] Newer versions of Mongoose support throttling download speed. From the manual...
Limit download speed for clients. throttle is a comma-separated list of key=value pairs, where key could be:

*                    limit speed for all connections
x.x.x.x/mask         limit speed for specified subnet
uri_prefix_pattern   limit speed for given URIs

The value is a floating-point number of bytes per second, optionally followed by a k or m character, meaning kilobytes and megabytes respectively. A limit of 0 means unlimited rate. The last matching rule wins. Examples:

*=1k,10.0.0.0/8=0   limit all accesses to 1 kilobyte per second, but give connections from the 10.0.0.0/8 subnet unlimited speed
/downloads/=5k      limit accesses to all URIs in `/downloads/` to 5 kilobytes per second; all other accesses are unlimited
3) The "access_control_list" option. In the code, accept_new_connection calls check_acl, which compares the client's IP to a list of IPs to accept and/or ignore. From the manual...
Specify an access control list (ACL). The ACL is a comma-separated list of IP subnets, each prepended by a '-' or '+' sign. Plus means allow, minus means deny. If the subnet mask is omitted, as in "-1.2.3.4", it means a single IP address. The mask may vary from 0 to 32 inclusive. On each request, the full list is traversed, and the last match wins. The default setting is to allow all. For example, to allow only the 192.168/16 subnet to connect, run "mongoose -0.0.0.0/0,+192.168/16". Default: ""
http://code.google.com/p/mongoose/wiki/MongooseManual
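For illustration, the "last match wins" traversal the manual describes can be sketched in a few lines of Python. This is a simplified model, not Mongoose's actual check_acl, and I write subnets in full dotted form because Python's ipaddress module requires it (Mongoose also accepts the shortened 192.168/16 form):

```python
import ipaddress

def acl_allows(acl: str, client_ip: str) -> bool:
    """Evaluate a Mongoose-style ACL: comma-separated +subnet/-subnet rules.
    The full list is traversed, the last match wins, and the default is allow."""
    allowed = True  # default setting is to allow all
    ip = ipaddress.ip_address(client_ip)
    for rule in filter(None, acl.split(",")):
        sign, subnet = rule[0], rule[1:]
        if ip in ipaddress.ip_network(subnet, strict=False):
            allowed = (sign == "+")
    return allowed

# deny everyone, then re-allow the 192.168.0.0/16 subnet
acl = "-0.0.0.0/0,+192.168.0.0/16"
```

Because the last match wins, the broad deny rule comes first and the narrower allow rule overrides it for matching clients.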
Of course, as soon as I give up and post, I find most of the answers were right in front of my face. Here are the options for them...
const char *options[] =
{
    "document_root", "C:/",
    "listening_ports", "127.0.0.1:8080",
    "extra_mime_types", ".bz2=plain/text",
    NULL
};
However, I am still not sure how to enable directory browsing. Right now, my callback function is just the basic one from the example (as seen below). What would I need to do to get the files listed?
static void *callback(enum mg_event event, struct mg_connection *conn, const struct mg_request_info *request_info)
{
    if (event == MG_NEW_REQUEST)
    {
        // Echo requested URI back to the client
        mg_printf(conn, "HTTP/1.1 200 OK\r\n"
                        "Content-Type: text/plain\r\n\r\n"
                        "%s", request_info->uri);
        return ""; // Mark as processed
    }
    else
    {
        return NULL;
    }
}
I want to implement a progress bar in my C++ Windows application when downloading a file using WinHTTP. Any idea how to do this? It looks as though WinHttpSetStatusCallback is what I want to use, but I don't see which notification to look for... or how to get the "percent downloaded"...
Help!
Thanks!
Per the docs:
WINHTTP_CALLBACK_STATUS_DATA_AVAILABLE
Data is available to be retrieved with WinHttpReadData. The lpvStatusInformation parameter points to a DWORD that contains the number of bytes of data available. The dwStatusInformationLength parameter itself is 4 (the size of a DWORD).
and
WINHTTP_CALLBACK_STATUS_READ_COMPLETE
Data was successfully read from the server. The lpvStatusInformation parameter contains a pointer to the buffer specified in the call to WinHttpReadData. The dwStatusInformationLength parameter contains the number of bytes read.
There may be other relevant notifications, but these two seem to be the key ones. Getting a "percent" is not necessarily trivial because you may not know how much data you're getting (not all downloads have Content-Length set); you can get the headers with:
WINHTTP_CALLBACK_STATUS_HEADERS_AVAILABLE
The response header has been received and is available with WinHttpQueryHeaders. The lpvStatusInformation parameter is NULL.
and if Content-Length IS available, then the percentage can be computed by keeping a running total of the bytes reported at each "data available" notification; otherwise your guess is as good as mine ;-).
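Language aside (the real code would live inside your WinHTTP status callback), the bookkeeping is just a running total. A sketch of the idea, with the callback mapping assumed as described above:

```python
class DownloadProgress:
    """Accumulate bytes seen across READ_COMPLETE-style notifications and
    report a percentage when Content-Length is known, or None when it isn't."""

    def __init__(self, content_length=None):
        self.content_length = content_length  # from the headers, if present
        self.bytes_read = 0

    def on_read_complete(self, num_bytes):
        """Call this with dwStatusInformationLength on each read completion."""
        self.bytes_read += num_bytes

    def percent(self):
        if not self.content_length:
            return None  # no Content-Length: percentage is undefined
        return 100.0 * self.bytes_read / self.content_length
```

When percent() returns None, a UI typically falls back to an indeterminate ("marquee") progress bar instead of a filled percentage.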