Let's say I want to send data to a server, then train a neural network with that data, and later on get prediction results.
That would break REST because the data returned for a request is not the same every time (especially if there were predictions while training is still running), right?
Is there a good standard out there for that, or should I rather implement a custom server?
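To make the scenario concrete, the workflow I have in mind looks roughly like this (the endpoint names are made up):

```python
# Rough client-side sketch of the workflow; all endpoint names are made up.
import time
import requests

BASE = "https://example.com/api"  # placeholder host

# 1. Upload the training data as its own resource.
dataset = requests.post(f"{BASE}/datasets", json={"rows": [[1, 2], [3, 4]]}).json()

# 2. Start a training job; the server answers right away with a job resource.
job = requests.post(f"{BASE}/trainings", json={"dataset": dataset["id"]}).json()

# 3. Poll the job resource; the representation changes as training progresses.
while True:
    status = requests.get(f"{BASE}/trainings/{job['id']}").json()
    if status["state"] == "finished":
        break
    time.sleep(5)

# 4. Ask the trained model for predictions.
prediction = requests.post(
    f"{BASE}/predictions",
    json={"model": status["model_id"], "input": [5, 6]},
).json()
print(prediction)
```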
I am currently working with Django but I am stuck as I don't know if I am pursuing the right model given the nature of my application.
Problem Statement:
I have to make a REST API for a client such that whenever I get a trigger for new entries in my database, I send those to the client. The client is supposed to listen on a URL, has asked for the data only once, and is now open to receiving data whenever it becomes available.
The client is not polling with GET requests every now and then.
There will be different endpoints in the API. In one I provide the client all the new data available to me, and in the other it asks for specific data (e.g. '/myAPI/givemethis').
I can easily implement the second requirement as it is a simple request-response case.
I am not sure how to send the client data as it becomes available, without the client making repeated requests.
It appears a Publisher-Subscriber model is better suited for my use case but I don't know how to implement it on Django.
I bumped into several concepts like StreamingServices and MQTT, but I am not sure which would be the right choice to go with.
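To make it concrete, what I imagine on the Django side would look roughly like this with Django Channels (the model, group name and consumer are made up; I don't know whether this is the right direction):

```python
# consumers.py -- rough sketch; assumes Django Channels plus a channel layer
# (e.g. channels_redis) are installed and configured. All names are made up.
import json
from channels.generic.websocket import AsyncWebsocketConsumer

class NewEntriesConsumer(AsyncWebsocketConsumer):
    group_name = "new_entries"

    async def connect(self):
        # The client connects once and stays subscribed.
        await self.channel_layer.group_add(self.group_name, self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard(self.group_name, self.channel_name)

    async def entry_created(self, event):
        # Invoked for every message pushed to the group (see the signal below).
        await self.send(text_data=json.dumps(event["entry"]))


# signals.py -- push every newly saved row to all connected clients.
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer
from django.db.models.signals import post_save
from django.dispatch import receiver
from .models import Entry  # hypothetical model

@receiver(post_save, sender=Entry)
def broadcast_new_entry(sender, instance, created, **kwargs):
    if created:
        async_to_sync(get_channel_layer().group_send)(
            "new_entries",
            {"type": "entry.created", "entry": {"id": instance.id}},
        )
```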
Kindly provide some suggestions.
We want to integrate a 3rd-party payment service; their API expects the PAN & expiration date, and we need to determine what PCI level we need.
We just collect this data on the client and send it to our server, which then sends it to them; we do not store it in a database.
If your server can see this data, you need PCI SAQ-D, end of story. It doesn’t matter if you’re storing it or not, what matters is that someone who compromises your server can see it in transit. And if you’re asking this question, you do not want to be responsible for all the requirements of D.
To qualify for SAQ-A, or SAQ-A-EP, which are the only other two valid for websites, the card data needs to never come to your server in a readable form. That could mean redirecting the user to a page hosted by your payment processor to enter their data, embedding an iframe they provide, posting it directly to them from the front end (i.e. JavaScript POST), or (maybe) encrypting it with a key that only they can decrypt.
More information can be found in the official summary document.
Consider a RESTful web service processing large documents on the server side. It could be a document converter accepting multi-page or single-page digital images and converting them to PDF. The user can compose the final PDF from several images by inserting them into the virtual document via REST. This means that API users will make several requests before the conversion can be started.
Now my question:
I need to signal the web service to start document processing. Because such processing can take some time (consider a video converter, for example), some kind of monitoring is required in order to be able to display progress information in the front-end.
How is this done in the modern RESTful Web Services? Or, in other words, is it possible to implement this nicely in the RESTful world (i.e. without resorting to some sort of RPC)?
I'd appreciate real examples and useful links.
202 Accepted
The 202 (Accepted) status code indicates that the request has been accepted for processing, but the processing has not been completed.
The representation sent with this response ought to describe the request's current status and point to (or embed) a status monitor that can provide the user with an estimate of when the request will be fulfilled.
In short, there is a "report on the progress of this instance of the process" resource, which the client can monitor.
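A minimal sketch of that pattern (using Flask purely for brevity; the /conversions resource and its fields are made up):

```python
# Sketch of the 202 Accepted + status-monitor pattern; Flask is used only for
# brevity, and the /conversions resource and its fields are made up.
import threading
import uuid
from flask import Flask, jsonify, url_for

app = Flask(__name__)
jobs = {}  # in-memory job store; a real service would persist this

def convert(job_id):
    # ... long-running conversion work, updating progress as it goes ...
    jobs[job_id].update(state="done", progress=100)

@app.route("/conversions", methods=["POST"])
def start_conversion():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"state": "running", "progress": 0}
    threading.Thread(target=convert, args=(job_id,)).start()
    # 202: accepted but not finished; Location points at the status monitor.
    return (
        jsonify(jobs[job_id]),
        202,
        {"Location": url_for("conversion_status", job_id=job_id)},
    )

@app.route("/conversions/<job_id>", methods=["GET"])
def conversion_status(job_id):
    # The front-end polls this resource to display progress.
    return jsonify(jobs[job_id])
```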
I have never used HATEOAS with REST APIs. What I understand is that with HATEOAS, one doesn't need to store URIs; the server sends the URIs in the response, and these can be used to fetch other or related resources.
But with HATEOAS, aren't we increasing the number of calls?
If I want to fetch customer-order information, and I first fetch the customer information and get the URI for its orders dynamically, isn't that an extra call?
I can understand the loose coupling, but I do not understand the exact use of this maturity level of REST.
Why should HATEOAS increase the number of required requests? Without the service returning URIs the client can use to perform a state transition (gather further information, invoke some tasks, ...), the client has to have some knowledge of how to build a URI itself (hence it is tightly coupled to the service), yet it still needs to invoke the endpoint on the server side. So HATEOAS just moves the knowledge of how to generate the URI from client to server.
Usually a further request sent to the server isn't really an issue, as each call should be stateless anyway. If you have a load-balanced server structure, the additional request does not really have a noticeable performance impact on the server.
If you do care about the number of requests a client issues to the server (for whatever reason), you might have a look at e.g. HAL JSON, where you can embed the content of sub-resources, though in the case of customer orders this might also have a significant performance impact: if a customer has placed plenty of orders, the response might be quite huge and the client has to manage all of that data even though it might not use it. Usually, instead of embedding lots of list items within a response, the service will point the client to a URI where the client can learn how to retrieve this information if needed. Often this kind of URI provides a pageable view on the data (like the orders placed by a customer).
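For illustration, a HAL JSON representation of a customer might look roughly like this (field names and URIs are made up); the client either follows the orders link or, when the service chooses to embed them, reads _embedded directly:

```json
{
  "id": 42,
  "name": "Jane Doe",
  "_links": {
    "self":   { "href": "/customers/42" },
    "orders": { "href": "/customers/42/orders?page=1" }
  },
  "_embedded": {
    "orders": [
      { "id": 7, "total": 19.99, "_links": { "self": { "href": "/orders/7" } } }
    ]
  }
}
```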
While pageable requests certainly increase the number of requests handled by the service, overall performance will improve, as the service does not have to return the whole order data to the client; this reduces the load on the backing DB as well as shrinking the actual response content length.
To sum my post up, HATEOAS is intended to move the logic of creating the URIs to invoke from clients to servers and therefore decouple clients from services further. The number of requests clients actually have to issue isn't tied to HATEOAS but to the overall API design and the requirements of the client.
We are trying to design 6 web services, which will serve another client component. The client component requires data from the web service we are implementing.
Now, the problem is that there is not just one web service we are implementing. There is one web service which the client component hits; this initiates a series of five more web services which gather data from their respective data stores and finally provide the data back to the original web service, which then delivers it back to the client component.
So, if the requested data becomes huge, this will be a serious problem for our internal communication channel.
So, what do you guys suggest? What can be done to avoid overloading the communication channel between the internal web services while, at the same time, still delivering the data to the client component?
Update 1
Using 5 web services, where one does not know about the others except the next one, is a business requirement. Actually, the "small services" of 5 companies are being integrated.
We use Java and Axis2
We've had a similar problem. Apart from trying to avoid it (e.g. for internal communication go directly to the DB instead of through a web service), you can mitigate it by at least not performing the 5 or so tasks in series. Make new threads to collect them all in parallel and process them at the end to reduce latency (except where they might contend for the same resource and bottleneck).
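Roughly like this (sketched in Python for brevity; with Java/Axis2 you would do the same thing with an ExecutorService, and the service URLs here are placeholders):

```python
# Sketch: collect from the downstream services in parallel instead of in series.
# Python is used for brevity; the service URLs are placeholders.
from concurrent.futures import ThreadPoolExecutor
import requests

SERVICES = [
    "http://service-a.internal/data",
    "http://service-b.internal/data",
    "http://service-c.internal/data",
    "http://service-d.internal/data",
    "http://service-e.internal/data",
]

def fetch(url):
    return requests.get(url, timeout=30).json()

with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
    # Latency is now roughly that of the slowest service, not the sum of all five.
    results = list(pool.map(fetch, SERVICES))

merged = {"parts": results}  # combine and hand back to the client component
```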
But before I'd do anything, load test it and see if it is even an issue, and get some baseline stats so you can see what improvement each change makes. Also, sometimes you might be better off tweaking network settings or the actual network rather than trying to optimise the code - but again, test and see.
Put all the data in a temporary compressed file and give back the FTP URL of the file.
The client fetches the big data chunk, uncompresses it, and reads it (maybe with some authentication mechanism for the FTP server).
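A rough sketch of the staging side (in Python; host, directory and file naming are placeholders):

```python
# Sketch: dump the result into a compressed temporary file and return only a
# small reference to it; host, directory and naming scheme are placeholders.
import gzip
import json
import uuid

def stage_result(data, export_dir="/srv/ftp/exports"):
    name = f"{uuid.uuid4()}.json.gz"
    with gzip.open(f"{export_dir}/{name}", "wt", encoding="utf-8") as fh:
        json.dump(data, fh)
    # The web service response carries this URL instead of the data itself.
    return {"url": f"ftp://files.example.com/exports/{name}"}
```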