express-gateway: Consolidate requests

Is there a way to chain / consolidate multiple REST calls? For example, expose an api that would accept all order details and then when that api is called, make multiple calls for different steps like add to cart, checkout etc and when it’s all done, send a response back.

Unfortunately we're not offering such a feature just yet. It's highly requested, though, so I feel we'll start working on it fairly soon.
In the meantime, you can achieve almost the same thing through a plugin that you write yourself.
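A rough sketch of what such a plugin might look like, written here in TypeScript against the registerPolicy hook of the plugin framework; the service URLs, response shapes and policy name are illustrative placeholders, not a tested implementation:

// Hypothetical Express Gateway plugin exposing an "aggregate-order" policy:
// one inbound request fans out to cart and checkout services and returns a
// single consolidated response.
import fetch from 'node-fetch';

export default {
  version: '1.0.0',
  init: (pluginContext: any) => {
    pluginContext.registerPolicy({
      name: 'aggregate-order',
      policy: () => async (req: any, res: any) => {
        // Step 1: add the submitted items to the cart (placeholder URL)
        const cart: any = await fetch('http://cart-service/cart', {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify(req.body.items)
        }).then(r => r.json());

        // Step 2: check out the cart created above (placeholder URL)
        const order: any = await fetch(`http://checkout-service/checkout/${cart.id}`, {
          method: 'POST'
        }).then(r => r.json());

        // Send one consolidated response back to the caller
        res.json({ cart, order });
      }
    });
  }
};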

Related

Express Gateway policy

I am testing out EG policies for my microservices app. One requirement is that whenever express gateway receives a request, I want to invoke a particular service, parse its result, and based on the result decide to proceed for downstream calls or return an error. It appears to be a very standard requirement. Is there any existing policy (could not find any here) for such scenarios or do I need to write a custom one? Thanks
This is Vincenzo — I am the maintainer of Express Gateway :)
Unfortunately you've spotted a gap in Express Gateway, which is "post proxy" policies. Fundamentally, right now the proxy policy is the last one to be executed, and there's nothing else you can run once the request has been sent to the downstream service.
This is a limitation that we definitely need to fix, although you're the first one to bring up this use case.
This does not mean that you cannot do it now. I think it'd be kind of easy as well, but unfortunately you'd need to fork the Gateway and add some code.
If you could articulate your use case a little bit more, we might evaluate whether there's a way to make it happen in the next release :)
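For the "check before proxying" half of the question, a custom policy registered from a plugin can already do it today: call the validation service, inspect the result, and either continue the pipeline or answer with an error. A hedged sketch (TypeScript; the validator URL and response handling are assumptions):

// Custom "pre-flight check" policy: runs before the proxy policy, so it can
// veto the downstream call. Names and URLs here are illustrative only.
import fetch from 'node-fetch';
import { Request, Response, NextFunction } from 'express';

export default {
  version: '1.0.0',
  init: (pluginContext: any) => {
    pluginContext.registerPolicy({
      name: 'pre-flight-check',
      policy: () => async (req: Request, res: Response, next: NextFunction) => {
        // Invoke the gate-keeping service first (placeholder URL)
        const check = await fetch('http://validator-service/validate', {
          method: 'POST',
          headers: { 'content-type': 'application/json' },
          body: JSON.stringify({ path: req.path })
        });
        if (check.ok) {
          next(); // proceed towards the proxy policy and the downstream call
        } else {
          res.status(403).json({ error: 'rejected by validation service' });
        }
      }
    });
  }
};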

Django - Connecting project apps with only REST API

I mainly have two questions. I haven't read this anywhere, but I am wondering whether or not it is a good idea to make all the data going in and out of all the apps in your project depend solely on REST API calls.
So if you, for instance, want to register a new user, you gather the data on the front-end, with no back-end work, and just send it as a REST call to your "registration app", where all validation and back-end work is done.
I find this method effective when working in big teams, as it makes dependencies more decoupled and each part of the project more separated and "clear". My question is, therefore: is this a viable way of developing? Are there any security or performance issues with it? And where can I read more?
Thanks
Max
It is perfectly viable; I think, like most choices, it has pros and cons. Here are some of them:
Pros:
Decoupling - Clients depend on the abstraction (i.e. the REST API) rather than the concretion (i.e. the website), so you gain clarity of design, ability to test outside of the browser, and you can do things like substitute the REST API with different implementations e.g. with a mock service for development/testing purposes. If, in addition, the REST API is implemented by a separate back-end service, then you can update it independently, and potentially scale it independently.
Responsive user-interface - The REST requests can avoid HTML page reloads and improve UX. Also you can make asynchronous REST calls.
Reduced payload - Typically the REST calls would return less data than the HTML sent in a page refresh.
Cons:
More complex client - You require more complex javascript, especially if you employ asynchronous REST calls.
Dynamic page building - Typically the result of a REST call requires some change in the UI, and you are forced to make that change dynamically in javascript, which adds complication. Your UI logic is then split between your HTML page templates and your javascript UI updates, which makes the UI harder to reason about.
Timeouts - You need to handle timeouts and errors in javascript (see the sketch after this list).
Sessions - You need some means of authenticating users and maintaining sessions. REST services should not maintain client-session state themselves, so you either need to store state in the client, or explicitly add state as a new REST resource with its own distinct URI(s).
Forced page reload - If you use this mechanism to avoid page reloads, users might have the page open for a significant period of time, and you might need some mechanism to cause them to reload it.
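To make the client-side cons concrete, here is a minimal browser sketch (TypeScript; the /api/registration/ endpoint and the #status element are assumptions) showing an asynchronous registration call with an explicit timeout and a dynamic UI update:

// Register a user via a REST call, with a timeout and dynamic UI feedback.
async function registerUser(data: { username: string; password: string }) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000); // 5s timeout

  try {
    const res = await fetch('/api/registration/', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(data),
      signal: controller.signal
    });
    if (!res.ok) throw new Error(`registration failed: ${res.status}`);
    // Dynamic page building: patch the UI rather than reloading the page
    document.querySelector('#status')!.textContent = 'Account created';
  } catch (err) {
    // Covers network errors, the timeout abort, and non-2xx responses
    document.querySelector('#status')!.textContent = `Error: ${err}`;
  } finally {
    clearTimeout(timer);
  }
}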

Java web application for multiple users

I need to design and implement a Java web application that can be used by multiple users at the same time. The data handled by this application is going to be huge, and it may take about 5 minutes for a page to display the results (database records).
I designed this application using HTML, Servlets and JSP. But when two users tried to get the records, only one was able to view the results while the other faced an error.
I always thought a web application would take care of handling multiple users but this is not the case.
Any insights on this would be highly appreciated.
Thanks.
I always thought a web application would take care of handling multiple users but this is not the case.
They do if they're written correctly. Obviously yours is not. That's all we can tell you unless you give more information, most importantly details of the error shown to the second user.
One possibility is that everything is OK on the web layer but your DB access for the first user causes an exclusive lock so that the second user cannot access the data at the same time. This could be fixed by using non-exclusive read locks. How to do that depends mainly on what DB you're using.
Getting concurrency right requires you to choose the correct tools and use them correctly. It doesn't just happen magically because it's a web app.
What are you using to develop this web application? If you are developing it your own way from scratch, I must say you are trying to re-invent a wheel that has already been created and refined by very solid frameworks.
I suggest you analyze your requirements thoroughly and study some of the available frameworks. Let them handle things like multi-threading and the other aspects in the best possible manner.
Handling multiple requests at a time is the container's job; as application developers, we have to concentrate on how we handle and process the requests the container forwards to us.
I also suggest you get some insight into how web applications work and how the request-response cycle happens.

Web Service vs Form posting

I have two websites (www.mysite1.com and myweb2.com; both are in ASP.NET with SQL Server as the backend) and I want to pass data from one site to the other. Now I am confused about whether to use a web service or a form post (from mysite1 to a page in myweb2).
Can anyone tell me the pros and cons of both?
By web service I assume you mean a SOAP-based web service?
Anyway, the two are roughly equal, each with a few advantages. Posting is more lightweight, while SOAP is standardized (sort of). I would go with the more RESTful approach, because I think SOAP brings too much overhead for simple tasks while not giving much of an advantage.
Webservices are SOAP messages (the SOAP protocol uses XML to pass messages back and forth), so your servers on both ends must understand SOAP and whatever extensions you want to talk about between them, and they probably (though they don't have to) need to be able to grok WSDL files (which "explain" the various service endpoints and the remote functionality available). Usually we call this the SOAP / WS-* stack, with emphasis on 'stack', as there are a few bits of software that need to be available, and the more complex the SOAP calls, the more of this stack needs to be available and maintained.
Using POST, on the other hand, is mostly associated with RESTful behaviours; for an example of such a protocol, look to HTTP. Inside the POST you can of course post complex XML, but people tend to use plain POST to simplify the calling, and use HTTP responses as replies. You probably don't need any extra software, as most if not all web kits have HTTP support. My own bias leans towards REST, in case you wonder. Through using HATEOAS you can create really good infrastructure for self-aware systems that can modify themselves with load and availability in real time, as opposed to the SOAP way, and this lies at the centre of the argument for it; HTTP was designed with large distributed networks in mind, dealing with performance and stability. SOAP tends to be a one-stop if-it-breaks-you're-stuffed kind of thing. (Again, remember my bias. I've written about this a lot on my blog, especially the architecture side and the impact of SOA vs. ROA. :)
There's a great debate as to which is "better", to which I can only say "it depends completely on what you want to do, how you prefer to do it, what you need it to do, your environment, your experience, the position of the sun and the moon(s), and the mood my cat is in." Eh, meaning, a lot.
I'm all for a healthy debate about this, but I tend to think that SOAP is a reinvention; SOAP is an envelope with a header and body, and if that sounds familiar, it is exactly how HTML was designed, a fact very few people tend to see. HTTP as just a protocol for shifting stuff around is well understood and extremely well supported, and SOAP uses it to shift its XML envelopes around. Is there a real difference between shifting SOAP and HTML around? Well, yes: the big difference is that SOAP reinvents all the niceties of HTTP (caching, addressability, state, scaling), uses HTTP only for delivering the message and nothing else, and then makes the stack itself deal with those niceties. So a lot of the goodness of HTTP is ignored and recreated in another layer (hence you need a SOAP stack to deal with it), which to me seems wasteful, ignorant, and needlessly complex.
Next up is what you want to do. For really complex things, there's lots in the webservices stack of standards (I think it's about 1200 pages combined these days) that could help you out, but if your needs are more modest (ie. not that crazy about seriously complex security, for example) a simple POST (or GET) of a request and an envelope back with results might be good enough. Results in HTTP are, as you probably know, tagged with an HTTP content-type, so lots is already supported, but you can create your own, for example application/xml+myformat (or more correctly, application/x-xml+myformat if I remember correctly). Send the request, check for a 200 response code, and parse.
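As an illustration of that flow (shown in TypeScript for brevity; the same three steps apply in ASP.NET or any HTTP-capable stack, and the URL and content type are placeholders):

// Plain-POST exchange: send an XML envelope, require a 200, parse the reply.
// Assumes a runtime with global fetch (modern browser or Node 18+).
async function sendEnvelope(xmlPayload: string): Promise<string> {
  const res = await fetch('https://myweb2.example/receive', {
    method: 'POST',
    headers: { 'Content-Type': 'application/xml' },
    body: xmlPayload
  });
  if (res.status !== 200) {
    throw new Error(`unexpected status: ${res.status}`);
  }
  return res.text(); // parse this however the two sites have agreed
}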
Both will work. One is heavy (WS-* stack) depending on what your needs are, the other is more lightweight and already supported. The rest is glue, as they say.
I would say the web service is definitely the best choice. A few pros:
If in the future you need to add another website, your infrastructure (the web service) is already there.
Cross-site form posting might give you problems when using cookies, or might trigger browser privacy restrictions.
If you use form posting you have to write the same code over and over again, while with the web service you write the code once and then use it at multiple locations. Easier to maintain, less code to write.
Maintainability (this is related to the above point): of course, all the code relevant to exchanging data is in one location (your web service).
There's probably even more, like design-time support/code completion.
From my little experience I'd say that you'd be best using a web service, since you can see the methods and structure of the service in your code (once you've created it at the receiving end, that is).
Also, using the form-posting method would mean you have to fake form submissions, which isn't as neat as making a web service call.
Your third way would be to get the databases talking, though I'm guessing they're disparate and can't 'see' each other?
I would suggest a web service (or WCF). As Beanie said, with a service you are able to see the methods and types of the service (that you expose), which would make for much easier and cleaner moving of data.
I agree with AlexanderJohannesen that it is debatable whether SOAP webservices or RESTful APIs are better; however, if both sites are under your control and done with ASP.NET, definitely go with SOAP webservices. The tools that Visual Studio provides for creating and consuming webservices are just great; it won't take you more than a few minutes to create the link between the two sites.
In the site that will receive the communication, create the web service by selecting Add Item in VS. Select Web Service and name it appropriately. Then just create a method with the logic you want to implement and add the [WebMethod] attribute, e.g.
[WebMethod]
public void AddComment(int UserId, string Comment) {
    // do stuff
}
Deploy this on your test server, say tst.myweb2.com.
Now on the consuming side (www.mysite1.com), select Add Web Reference, point the URL to the address of the web service we just created, give it a name and click Add Reference. You now have a proxy class that you can call just like a local class. Easy as pie.

Developing/Testing Twitter apps without slamming the API

I'm currently working on an app that works with Twitter, but while developing/testing (especially those parts that don't rely heavily on real Twitter data), I'd like to avoid constantly hitting the API or publishing junk tweets.
Is there a general strategy people use for taking it easy on the API (caching aside)? I was thinking of rolling my own library that would essentially intercept outgoing requests and return mock responses, but I wanted to make sure I wasn't missing anything obvious first.
I would probably start by mocking the specific parts of the API you need for your application. In fact, this may actually force you to come up with a cleaner design for your app, because it more or less requires you to think about your application in terms of "what" it should do rather than "how" it should do it.
For example, if you are using the Twitter Search API, your application most likely should not care whether or not you are using the JSON or the Atom format option. The ability to search Twitter using a given query and get results back represents the functionality you want, so you should mock the API at that level of abstraction. The output format is just an implementation detail.
By mocking the API in terms of functionality instead of in terms of low-level implementation details, you can ensure that the application does what you expect it to do, before you actually connect to Twitter for real. At that point, you've already verified that the app works as intended, so the only thing left is to write the code to make the REST requests and parse the responses, which should be fairly straightforward, so you probably won't end up hitting Twitter with a lot of junk data at that point.
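A sketch of what mocking at that level of abstraction can look like (TypeScript; the interface and both implementations are hypothetical, and the search URL and response shape only loosely reflect the Search API):

// Mock the *functionality* (search for tweets), not the wire format.
interface TweetSearch {
  search(query: string): Promise<string[]>; // tweet texts; format-agnostic
}

// Real implementation: a REST call to the Twitter Search API.
class TwitterSearch implements TweetSearch {
  async search(query: string): Promise<string[]> {
    const res = await fetch(
      `http://search.twitter.com/search.json?q=${encodeURIComponent(query)}`
    );
    const data: any = await res.json();
    return data.results.map((r: { text: string }) => r.text);
  }
}

// Mock used while developing: no network traffic, no junk tweets.
class MockTweetSearch implements TweetSearch {
  async search(query: string): Promise<string[]> {
    return [`canned result for "${query}"`, 'another canned tweet'];
  }
}

Since the application codes against TweetSearch, swapping the mock for the real client later is a one-line change.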
Caching is probably the best solution. Besides that, I believe the API is limited to 100 requests per hour. So maybe make a function that counts each request, and as it gets close to 100 it switches to, say, pulling fresh data only every 10 API requests. It wouldn't be a hard cutoff; probably a gradient function that tapers off as you near the limit.
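One way to read that "gradient" idea (a sketch in TypeScript; the quadratic taper and the hourly reset are arbitrary choices):

// The closer the hourly count gets to the limit, the more requests are
// answered from cache instead of the API.
const HOURLY_LIMIT = 100;
let requestsThisHour = 0;
setInterval(() => { requestsThisHour = 0; }, 60 * 60 * 1000); // hourly reset

function shouldHitApi(): boolean {
  const used = requestsThisHour / HOURLY_LIMIT;      // 0.0 .. 1.0
  const probability = Math.max(0, 1 - used * used);  // tapers off near limit
  if (requestsThisHour < HOURLY_LIMIT && Math.random() < probability) {
    requestsThisHour++;
    return true;  // spend one API request
  }
  return false;   // serve cached data instead
}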
I've used Tweet#; it caches and should do everything you need, since it has 100% of Twitter's API covered and then some...
http://dimebrain.com/2009/01/introducing-tweet-the-complete-fluent-c-library-for-twitter.html
Cache stuff in a database... If the cache is too old then request the latest data via the API.
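A sketch of that stale-check (TypeScript; a Map stands in for the database and fetchTweets for the real API call):

const MAX_AGE_MS = 15 * 60 * 1000; // 15 minutes; tune to the staleness you can tolerate

type CacheEntry = { tweets: string[]; fetchedAt: number };
const cache = new Map<string, CacheEntry>(); // stand-in for a DB table

async function fetchTweets(query: string): Promise<string[]> {
  return [`live result for ${query}`]; // the real API request would go here
}

async function getTweets(query: string): Promise<string[]> {
  const hit = cache.get(query);
  if (hit && Date.now() - hit.fetchedAt < MAX_AGE_MS) {
    return hit.tweets; // cache is fresh enough; skip the API
  }
  const tweets = await fetchTweets(query);
  cache.set(query, { tweets, fetchedAt: Date.now() });
  return tweets;
}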
Also think about getting your application account whitelisted; it will give you a 20,000 API request limit per hour vs. the measly 100 (which is meant for a user, not an application).
http://twitter.com/help/request_whitelisting