Idempotent WebAPI - web-services

In a game scenario:
When a player opens a chest, they receive an item. The item is generated randomly from a loot table, and the probability of each item being dropped is configurable.
The main requirements are that the web service be idempotent and that the loot table be configurable at runtime.
How can this service be implemented?
My approach was to pass the loot table, with the probability for each item, in the query string. The player ID combined with the chest ID can then be used as a seed to generate the random item.
For example:
http://[URL]/api/OpenChest?loottable=Sword:10|Shield:10|HealthPotion:30&playerId=1&chestId=1
This way the call doesn't have any side effects, and the web server can cache the response, since it will always return the same item for that player from a particular chest.
Is this correct? Is this service idempotent? And is there any other way to implement this?
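For reference, a minimal sketch of this deterministic approach (the loot-table parsing and helper names are made up here and follow the example URL above):

```python
import hashlib
import random

def parse_loot_table(loottable: str) -> dict:
    """Parse 'Sword:10|Shield:10|HealthPotion:30' into {item: weight}."""
    return {name: int(weight)
            for name, weight in (entry.split(":") for entry in loottable.split("|"))}

def open_chest(loottable: str, player_id: int, chest_id: int) -> str:
    """Deterministically pick an item: the same inputs always yield the same item."""
    weights = parse_loot_table(loottable)
    # Derive a stable seed from the player/chest pair. A cryptographic hash is
    # reproducible across processes, unlike Python's built-in hash(), which is salted.
    seed = int(hashlib.sha256(f"{player_id}:{chest_id}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    items = sorted(weights)  # fixed order so the weighted draw is reproducible
    return rng.choices(items, weights=[weights[i] for i in items], k=1)[0]

# Repeated calls with the same arguments return the same item.
print(open_chest("Sword:10|Shield:10|HealthPotion:30", player_id=1, chest_id=1))
```

Note that the drawn item also depends on the loot table passed in, so changing the table (or its formatting) produces a different URL and possibly a different item, which is consistent with keying the cache on the full request.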

Related

Generate web service parameters automatically

I have a scenario where I have to pull data from a web service using a REST Web Service Consumer transformation. For example, the endpoint URL is http://example/2015/Q1. Here I have to parameterise 2015/Q1 as $$DATES, but I cannot change the parameter values manually. I have to design my mapping so that it keeps advancing the dates automatically across all runs, past and future. Please suggest a way to do this.
You can have a parent workflow that dynamically creates a script with "pmcmd startworkflow" calls for all the quarters you need. The parent workflow calls the script to invoke the child workflow n number of times. You also need a table or file listing all the quarters with a flag that says whether each quarter has been processed. In the child workflow (the actual one you already have) you update the flag and mark the quarter as processed. Each run of the child workflow then picks up the first unprocessed quarter and processes it.
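As a rough illustration of the driver that writes such a script, here is a hedged Python sketch; the control-file format, the folder/workflow names, the connection flags and the parameter-file layout are all assumptions to be adapted to your environment:

```python
# Sketch: emit a shell script of "pmcmd startworkflow" calls for every
# unprocessed quarter listed in a control file.
QUARTERS_FILE = "quarters.txt"   # assumed format per line: 2015/Q1,0  (quarter,processed flag)
OUTPUT_SCRIPT = "run_quarters.sh"

with open(QUARTERS_FILE) as f:
    rows = [line.strip().split(",") for line in f if line.strip()]

with open(OUTPUT_SCRIPT, "w") as out:
    out.write("#!/bin/sh\n")
    for quarter, processed in rows:
        if processed == "1":
            continue  # the child workflow already marked this quarter as processed
        # Write a one-off parameter file so the child workflow picks up $$DATES.
        param_file = f"param_{quarter.replace('/', '_')}.txt"
        with open(param_file, "w") as pf:
            pf.write("[Global]\n")
            pf.write(f"$$DATES={quarter}\n")
        # Placeholder Integration Service / domain / credentials / folder / workflow names.
        out.write(
            "pmcmd startworkflow -sv INT_SVC -d DOMAIN -u USER -p PASS "
            f"-f MY_FOLDER -paramfile {param_file} -wait wf_child\n"
        )
```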
Hope that helps.

Updating front end data as backend does analysis

I've been self-studying web design and want to implement something, but I'm really not sure how to accomplish it, or even whether I can.
The only frontend I have dealt with is Angular 4, and the only backend is Django REST Framework. I have managed to get user models done in DRF, get the frontend authenticating users with JSON Web Tokens, and make various kinds of GET and POST requests.
What I want to do is have a button on the front end. When the button is clicked, it sends a GET request that runs a text-mining algorithm producing a list. The algorithm may take some time to complete, maybe in the range of 20-30 seconds, but I don't want the user to wait that long just to get back a single response containing the fully compiled list.
Is it possible to, say, create a table in Angular and have the backend send another response every couple of seconds containing more data, which the frontend then appends to that table? Something like:
00.00s  button -> GET request
01.00s  DRF starts analysis
05.00s  DRF returns the first estimated 10% of the overall list
09.00s  DRF finds 10% more, returns an estimated 20% of the overall list
then repeat this process until the algorithm has stopped. The list will be very small in size, probably around 20 strings of about 15 words each.
I already tried sending multiple responses from Django in a for loop, but the Angular front end just receives the first one and then stops listening.
No, that's not possible. Each request gets exactly one response, not multiple.
You have two options:
- Just start your algorithm with an endpoint like /start, and then check the state at an interval on an endpoint like /state (see the sketch after this list)
- Read about WebSockets, or try Firebase (or AngularFire). These provide two-way communication
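A minimal sketch of the first option, assuming Django REST Framework function-based views and an in-memory job store; my_text_mining_generator is a hypothetical stand-in for your algorithm, and in production you would reach for Celery or similar rather than a bare thread:

```python
import threading
import uuid

from rest_framework.decorators import api_view
from rest_framework.response import Response

# In-memory job store: job_id -> {"done": bool, "results": [...]}.
# Only suitable for a single-process development server.
JOBS = {}

def run_text_mining(job_id):
    # Placeholder for the real algorithm; it appends partial results as they appear.
    for partial in my_text_mining_generator():   # hypothetical generator of strings
        JOBS[job_id]["results"].append(partial)
    JOBS[job_id]["done"] = True

@api_view(["POST"])
def start(request):
    """POST /start -> kick off the analysis and return a job id immediately."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"done": False, "results": []}
    threading.Thread(target=run_text_mining, args=(job_id,), daemon=True).start()
    return Response({"job_id": job_id})

@api_view(["GET"])
def state(request, job_id):
    """GET /state/<job_id> -> current partial results; Angular polls this every few seconds."""
    job = JOBS.get(job_id)
    if job is None:
        return Response({"error": "unknown job"}, status=404)
    return Response({"done": job["done"], "results": job["results"]})
```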

Business logic and RESTful API design

Let's assume we have a simple API allowing clients to fetch a list of items of a specific type:
GET /items/foo
GET /items/bar
GET /items/blah
A response is a list of items of the requested type; each entry has a unique ID.
The client will usually display these items in a table/grid/etc.
Now we must implement a pinning feature in the client, so another API allows pinning/unpinning items based on their ID and type. I was discussing with my colleagues how to inform the client about which items are pinned and which are not.
One option was to have another API, GET /pinning/{type}, return the full list of pinned items of the specified type.
Another solution was to have a similar API, GET /pinning/{type}, return only the list of IDs of the pinned items and let the client sort it out.
The first solution was accepted. Their argument was that the backend is responsible for business logic, that the client shouldn't be involved in business logic, and that the client should just display the data it receives from the server. This argument didn't convince me. I think the server should, in this case, provide the data that allows the client to perform additional presentation logic.
Which solution is better? Or what other solutions are possible?
If the server only returned item IDs at GET /pinning/{type}, the client would have to repeatedly call something like GET /items/{itemId} in order to obtain data it can display in the UI, right? This in turn would just increase the load on the server. If the ID alone is enough for the client, you can probably get away with the proposed solution. Since both the client and the server seem to be under the same umbrella (as in, your company is also the API consumer), you have enough information to make that decision.
Even if it were a public API with lots of clients, I would still go down the route of returning items instead of just item IDs - probably in a paged manner, for performance reasons.
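To make the two candidate response shapes concrete, here is a hedged sketch; Flask and the in-memory data are purely illustrative, and the second route is given a different path here only so both variants can coexist in one file:

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative data; in reality this comes from your item store and pinning store.
ITEMS = {
    "foo": [{"id": 1, "name": "Foo one"}, {"id": 2, "name": "Foo two"}],
}
PINNED_IDS = {"foo": {2}}

# Option 1 (the accepted one): GET /pinning/{type} returns the full pinned items,
# ready for the client to display as-is.
@app.route("/pinning/<item_type>")
def pinned_items(item_type):
    pinned = PINNED_IDS.get(item_type, set())
    return jsonify([item for item in ITEMS.get(item_type, []) if item["id"] in pinned])

# Option 2: return only the IDs and let the client match them against GET /items/{type}.
@app.route("/pinning-ids/<item_type>")
def pinned_ids(item_type):
    return jsonify(sorted(PINNED_IDS.get(item_type, set())))
```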

Access list data as a group

We have a company program designed to help us get control over data. It has a feature to group all the applications of one client: if I want to take a look at them, I click on the client and see a list of all applications made for him. Take a look at the picture below:
I was wondering if Microsoft Access can do the same? If yes, where should I start looking?
I did some internet searching and found no solution.
That is built in, and it is called a Subdatasheet. If you have relationships properly set between Clients and Orders, for instance, then when you open the Clients table you will see a small "+" allowing you to view the Orders of the current client. You may have to set the Subdatasheet Name property of the Clients table to "Orders" in this case.
If you want to work with forms, you can build a continuous form for Clients, then one for Orders, then insert the Orders subform in the footer of the Clients form. Access might tell you that you can't do this; just ignore it, it works.
In Access that would simply be a continuous form with a filter. Typically opened from a list of clients, setting a filter for the applications of the selected client.
Unless I'm misunderstanding the question.

SOA/Web Service Pagination

In SOA we should not be building or holding state (or designing dependencies) between client and server. This is understood. But what patterns can be followed when a client wants to consume a real-time service that may return an open-ended number of 'rows'?
Web applications, which are similar to SOA but allow for state (sessions), have solved this with pagination. Pagination requires (in most cases, especially with SQL) that the server hold the data and that the client request it in chunks.
If we were to consider pagination-like scenarios for web services, what patterns would these follow that would still allow the tenets of SOA to be adhered to (or as closely as possible)?
Some rules for the thinkers:
1) Backed by a SQL database (therefore there is no concept of a row number in a select set)
2) It is important to not skip a row or duplicate a row in a set during pagination
3) Data may be inserted and deleted at any time into the database by other clients
4) There is no need to consider the dataset a live (update-able) dataset
Personally, I think that 1 and 2 above already spell out the solution by constraining the solution space with the requirements.
My proposed solution would have the data (as much as is selected) stored in a read-only store/cache, where it can be assigned a row number within the result set, and pagination would occur on this data snapshot. I would have infrastructure to store the snapshots (servers, external caches, memcached or ehcache - this must scale quite large). The result of such a query would be a snapshot ID, and clients could retrieve the data from the snapshot using a snapshot API (web services) and the snapshot ID. Results would be processed in a read-only, forward-only manner, x records at a time, where x is something reasonable.
Competing thoughts and ideas, criticisms or accolades would be greatly appreciated.
Paginated results in a web service are actually quite easy to achieve.
All you have to do is add two parameters to the web service call: page size and page number.
Page size is the number of results to include in a page. Page number is the number of the page of results you are looking for.
Your web service then goes back to the database (or cache), retrieves the results, figures out which results fit on the requested page, and returns only those results.
The client then has to make a single request per page of results they want from the service.
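A sketch of that answer against a SQL backend that supports LIMIT/OFFSET (SQLite here; the table and column names are made up):

```python
import sqlite3

def get_page(conn, page_number: int, page_size: int):
    """Return one page of results; page_number is 1-based."""
    offset = (page_number - 1) * page_size
    return conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
        (page_size, offset),
    ).fetchall()

# Example usage against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [(f"item {i}",) for i in range(25)])
print(get_page(conn, page_number=2, page_size=10))
```

Note that plain LIMIT/OFFSET paging does not by itself satisfy rule 2 above: if other clients insert or delete rows between page requests, later pages can skip or repeat rows, which is exactly what the snapshot idea in the question tries to avoid.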
What you propose with memcached will also work with a caching table. The first service call would (1) INSERT results INTO the caching table with a snapshot ID (2) return the first page from the caching table and the snapshot ID. Subsequent calls would return pages based on page size and page number by querying the caching table using the snapshot ID.
I should think this could also be optimized by using an in-memory caching table, but that depends on whether your database supports INSERT-INTO from a disk table to an in-memory table. That might get complicated in a clustered environment though.
Such a cache is stateful by its very nature if you are retaining a client-specific copy between requests, whether the storage is a session object, a database table or a memcached data store. Given the requirements, though, you have no choice but to cache the results in some form or another, with the caveat that you then risk returning deleted or no-longer-relevant records as if they were legitimate results.
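A hedged sketch of the caching-table variant described above, using SQLite purely for illustration (the table names, the snapshot-ID scheme and the page API are assumptions):

```python
import sqlite3
import uuid

def create_snapshot(conn) -> str:
    """Freeze the current result set into the caching table under a new snapshot ID."""
    snapshot_id = str(uuid.uuid4())
    rows = conn.execute("SELECT id, name FROM items ORDER BY id").fetchall()
    conn.executemany(
        "INSERT INTO snapshot_cache (snapshot_id, row_num, item_id, name) VALUES (?, ?, ?, ?)",
        [(snapshot_id, n, item_id, name) for n, (item_id, name) in enumerate(rows, start=1)],
    )
    return snapshot_id

def get_snapshot_page(conn, snapshot_id: str, page_number: int, page_size: int):
    """Page through the frozen snapshot; rows can no longer shift, duplicate or vanish."""
    first = (page_number - 1) * page_size + 1
    return conn.execute(
        "SELECT item_id, name FROM snapshot_cache "
        "WHERE snapshot_id = ? AND row_num BETWEEN ? AND ? ORDER BY row_num",
        (snapshot_id, first, first + page_size - 1),
    ).fetchall()

# Setup for the sketch only; real code would point at your actual tables.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE snapshot_cache (snapshot_id TEXT, row_num INTEGER, item_id INTEGER, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)", [(f"row {i}",) for i in range(30)])

snap = create_snapshot(conn)
print(get_snapshot_page(conn, snap, page_number=1, page_size=10))
```

A real implementation would also need to expire old snapshots, since each one is effectively server-side state held on behalf of a client.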
SOA is not meant for such low-level functionality.
SOA is meant to glue together business areas, not frontends to backends. Your application is not a "SOA" application just because it talks to the back end using web services; that is nonsense, since SOA is meaningless in the context of one isolated system.
From that point of view, it is then clear that, in SOA, the caller should not know about the SQL table you are paginating; that's an implementation detail that SOA should hide. On the other hand, the server should not know about the client's state, because it should be agnostic to the details of its clients in order to be truly open.
So, just understand that pagination is not SOA. Do as you wish, but understand that the web service you are using to paginate is an internal artifact of your application, not something to be used by external clients on a SOA bus. Also remember that it cannot be transactionally consistent without state on the server. The problem is probably that you have only one service layer for both the application's UI and the SOA bus; you need to separate them.
Using this web service on a SOA bus would be bad. It cannot stay consistent as the user paginates, and as other applications hook into it they become tied to the specific SQL.
... then you might as well have granted direct SQL access to the table for all that matters.
SOA is for business messages between systems, not to glue an application's frontend to the backend.
Same problem, resolved using the Navision approach.
$ws->getList($first_record_id, $limit)
This returns a page of $limit elements that starts after the passed id:
SELECT * FROM collection WHERE collection.id > $first_record_id ORDER BY id ASC LIMIT $limit
Navision uses a Key (each element has a key), but in MySQL an auto-increment id works better.
In this case pagination is intended for handling large result sets, not for frontend pagination...
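The same keyset idea in runnable form (SQLite stands in for MySQL here; with an auto-increment id the query behaves the same way):

```python
import sqlite3

def get_list(conn, first_record_id: int, limit: int):
    """Return the next `limit` rows with id greater than the last id already seen."""
    return conn.execute(
        "SELECT id, payload FROM collection WHERE id > ? ORDER BY id ASC LIMIT ?",
        (first_record_id, limit),
    ).fetchall()

# Example: walk the whole collection page by page without OFFSET.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE collection (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO collection (payload) VALUES (?)", [(f"row {i}",) for i in range(12)])

last_id = 0
while True:
    page = get_list(conn, last_id, limit=5)
    if not page:
        break
    print(page)
    last_id = page[-1][0]  # resume after the last id of the previous page
```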
I am not sure that SOA is the concern here. The problem you have seems to be with paginating your APIs. I will point you to how Twitter handles its pagination: dev.twitter.com/rest/public/timelines