Configure Istio destination rule to use a fallback URL instead of 503

I understand you can use Istio to open a circuit breaker when a service isn't responding. Instead of returning a 503, is it possible to redirect to a different URL? Same question when the original service returns a 500: can we redirect to another URL?
Or is it possible to have an offline-mode response provided by Istio? I assume the easiest way to do this is through URL redirection to an offline-mode service URL, but I'm open to ideas...

can we redirect to another URL?
If I understand correctly, you're asking if it's possible to do that just with Istio.
According to the documentation:
While Istio failure recovery features improve the reliability and availability of services in the mesh, applications must handle the failure or errors and take appropriate fallback actions. For example, when all instances in a load balancing pool have failed, Envoy returns an HTTP 503 code. The application must implement any fallback logic needed to handle the HTTP 503 error code.
And from Christian Posta's blog post on dzone.com:
Istio improves the reliability and availability of services in the mesh. However, applications need to handle the errors and take appropriate fallback actions. For example, when all instances in a load balancing pool have failed, Envoy will return HTTP 503. It is the responsibility of the application to implement any fallback logic that is needed to handle the HTTP 503 error code from an upstream service.
With a service mesh, at the moment without specialized libraries for failure context propagation, the failure reasons are more opaque. This doesn't mean our application cannot take fallbacks (for both transport and client-specific errors). I'd argue it's very important for the protocol of any application (whether using library-specific frameworks or not) to always adhere to the promises it's trying to keep for its clients. If it finds that it cannot complete its intended action, it should figure out a way to gracefully degrade. Luckily, you don't need application-specific frameworks for this. Most languages have built-in error and exception trapping and handling. Fallbacks should be implemented in these exception paths.
Sadly the answer is no, you can't; you would have to implement that in your application.
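For context, the circuit breaker that produces those 503s is configured on a DestinationRule roughly like the sketch below; the host name and thresholds are illustrative. Note that nothing in the spec takes a fallback URL: once hosts are ejected, Envoy answers 503 and the caller has to degrade gracefully.

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-circuit-breaker     # illustrative name
spec:
  host: reviews                     # illustrative service
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveGatewayErrors: 5   # 502/503/504 responses trigger ejection
      interval: 30s
      baseEjectionTime: 60s
      maxEjectionPercent: 100       # when every host is ejected, clients just see 503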
Additional resources:
Demystifying Istio Circuit Breaking
Istio Circuit Breaker: When Failure Is an Option

Related

Istio shifting service implementation on failure or active/active and active/passive services

I want to know how I can have different implementations of the same service and switch traffic from one to the other when failures start to occur (active/passive), or have traffic go from a 50%/50% split to a 0%/100% split when service implementation A is not responding. I would expect the 50/50 split to be restored once implementation A starts working again.
For example, I want to have a payment service, with one implementation using Cybersource and the other using Stripe (or whatever other provider makes sense). My implementations will start returning 504 when they detect that response times for one of the providers are above a certain threshold, or good old 500 because a bug occurred. At that point, I want clients to only connect to the fastest (properly working) implementation for a while, and to gradually retry the failed implementation once the health probe gives it a green light.
Similarly, for an active/passive scenario, perhaps I have a search API and want all traffic to go to implementation A. However, when that implementation starts returning 5XX, I want traffic routed to implementation B, which perhaps offers a degraded experience but can be used as a backup.
When I read the Istio documentation, blogs, etc., I don't see the scenarios above. Perhaps Istio is not the right choice for that?
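As far as I know, Istio has no declarative "shift the weights when one implementation fails" primitive. The closest built-in approximation is a weighted split combined with outlier detection: ejection removes failing pods from a subset's pool, but the 50/50 weights themselves never change, so restoring or rebalancing the split would have to be done by your own tooling rewriting the VirtualService. A sketch with hypothetical names:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payments              # hypothetical service
spec:
  hosts:
  - payments
  http:
  - route:
    - destination:
        host: payments
        subset: cybersource   # subsets would be defined in a DestinationRule,
      weight: 50              # ideally with outlierDetection to eject bad pods
    - destination:
        host: payments
        subset: stripe
      weight: 50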

Istio Circuit Breaker Blacklist/whitelist error codes

Is there a way to configure Istio to blacklist or whitelist an error code? I have tried with 500 (Internal Server Error), but the circuit breaker does not open on 500 either.
The circuit breaker doesn't have that kind of functionality.
Furthermore, a 500 does not count toward opening the circuit breaker at all; there is an issue about this on GitHub:
We try not to expose the plethora of sometimes confusing Envoy options to end users, in the routing api.
Within a mesh, gateway errors will be more common (502/503/504) while most sensible external services will return a 503 to shed load.
Secondly we just made the outlier detection generic to both tcp and http. The consecutive gateway error applies only to http and will make no sense in tcp context.
I also feel that 500 error code is not something indicative of overload. The whole idea behind outliers is to remove overloaded servers from the lb pool.
We don't have very many users relying on this behavior I think. We kept it intentionally generic so that we can switch to a more specific error code in future (which happens to be now).
Hope this helps.
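For what it's worth, the gateway-error/5xx distinction drawn in that reply maps directly onto the outlier detection fields. Depending on your Istio version you may see either or both of the following (a sketch; field availability varies by release):

trafficPolicy:
  outlierDetection:
    consecutiveGatewayErrors: 5   # counts only 502/503/504
    consecutive5xxErrors: 10      # newer releases: counts any 5xx, including 500
    interval: 10s
    baseEjectionTime: 30s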

Elasticsearch: are all possible GET requests always non-destructive, idempotent, and safe?

Are there any destructive calls that can be invoked with an HTTP GET against Elasticsearch?
Examples of "destructive" might include actions such as deleting items, deleting indexes, or changing settings; pretty much anything that can modify the settings, state, or data of Elasticsearch.
I want to make an operational command-line tool that would allow any arbitrary GET request to Elasticsearch, if GET requests are safe. There is no worry about DDoS attacks or deliberate malice, since the operational tool will only be accessible to engineers.
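A minimal sketch of such a tool, assuming a cluster on the default local port; its only safety mechanism is that it can physically issue nothing but GET, which is only as safe as Elasticsearch's adherence to read-only GET semantics:

// GET-only Elasticsearch console tool (sketch).
// Usage: esget "/_cat/indices?v"
using System;
using System.Net.Http;
using System.Threading.Tasks;

class EsGetTool
{
    static async Task Main(string[] args)
    {
        var client = new HttpClient
        {
            BaseAddress = new Uri("http://localhost:9200") // assumed endpoint
        };
        // HttpClient.GetAsync can only ever produce a GET request,
        // so no DELETE/PUT/POST can sneak through this tool.
        HttpResponseMessage response = await client.GetAsync(args[0]);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}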

Why are RESTful Applications easier to scale

I always read that one reason to choose a RESTful architecture is (among others) better scalability for web applications under high load.
Why is that? One reason I can think of is that the defined resources, which are the same for every client, make caching easier. After the first request, subsequent requests can be served from a memcached instance, which also scales well horizontally.
But couldn't you also accomplish this with a traditional approach where actions are encoded in the URL, e.g. booking.php?userid=123&travelid=456&foobar=789?
A part of REST is indeed the URL part (it's the R in REST) but the S is more important for scaling: state.
The server end of REST is stateless, which means that the server doesn't have to store anything across requests. This means that there doesn't have to be (much) communication between servers, making it horizontally scalable.
Of course, there's a small bonus in the R (representational) in that a load balancer can easily route the request to the right server if you have nice URLs, and GETs could go to a slave while POSTs go to the master.
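To make that concrete: because each request carries everything the server needs (the token and URLs below are made up), consecutive requests from the same client can land on different machines with no session replication at all:

GET /bookings/456 HTTP/1.1
Host: api.example.com
Authorization: Bearer eyJhbGci...      (can be served by server A)

POST /bookings HTTP/1.1
Host: api.example.com
Authorization: Bearer eyJhbGci...      (can be served by server B)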
I think what Tom said is very accurate, but another problem with scalability is the barrier to change as you scale. One of the biggest tenets of REST as it was intended is hypermedia: the server owns the paths and passes them to the client at runtime. This allows you to change your code without breaking existing clients. However, you will find most implementations of REST to simply be RPC hiding behind the guise of REST, which does not scale in that sense.
"Scalable" or "web scale" is one of the most abused terms when it comes to the web, the cloud and REST, and mainly used to convince management to get their support for moving their development team on board the REST train.
It is a buzzword that holds no value. If you search the web for "REST scalability" you'll find a lot of people parroting each other without any concrete evidence.
A REST service is exactly as scalable as a service exposed over a SOAP interface. Both are just HTTP interfaces to an application service. How well that service actually scales depends entirely on how it was implemented; it's possible to write a service that cannot scale at all in both REST and SOAP.
Yes, you can do things with SOAP that make it scale worse, like relying on state and sessions, but SOAP out of the box does not do this. Such state requires a smarter load balancer, which you want anyway if you're really concerned with any form of scaling.
One thing that REST allows that SOAP doesn't, and that some other answers here address, is caching cacheable responses through an HTTP caching proxy or at the client side. This may make a REST service somewhat more lightly loaded than a SOAP service when a lot of operations' responses are cacheable. All this means is that fewer requests end up in your service.
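For illustration, a response like the one below (headers made up) can be served by any intermediate HTTP cache for five minutes without the request ever reaching your service; in practice there is no equivalent for a POSTed SOAP envelope:

GET /travels/456 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Cache-Control: public, max-age=300
ETag: "abc123"
Content-Type: application/json

{"travelId": 456, "status": "confirmed"}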
The main reason a REST application is said to be scalable is that it is built on HTTP, and HTTP is stateless. Stateless means nothing is shared between requests, so any request can go to any server in a load-balanced cluster; nothing forces a given user's request to go to a particular server. Where per-user data is needed, it can be carried in a token.
Because of this statelessness, all REST applications are easy to scale. But if you want high throughput (requests handled per second) on each server, you should minimize blocking in the application. Follow these tips:
Make each REST resource a small entity; don't read data from a join of many tables.
Read data from nearby databases.
Use caches (Redis) instead of hitting the database on every request, saving disk I/O (see the sketch after this list).
Always keep data sources as close as possible, because blocking on remote data leaves server resources (CPU) idle, and no other request can use that capacity while it sits idle.
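A sketch of the cache tip in C# using StackExchange.Redis; the connection string, key scheme, and LoadFromDatabase are placeholders:

using System;
using StackExchange.Redis;

class ProductCache
{
    private readonly IDatabase _cache =
        ConnectionMultiplexer.Connect("localhost:6379").GetDatabase();

    public string GetProduct(string id)
    {
        string key = "product:" + id;
        string cached = _cache.StringGet(key);   // fast path: memory, no disk I/O
        if (cached != null)
            return cached;

        string fromDb = LoadFromDatabase(id);    // slow path: one small database read
        _cache.StringSet(key, fromDb, TimeSpan.FromMinutes(5));
        return fromDb;
    }

    private string LoadFromDatabase(string id)
    {
        // Placeholder for the real single-table read.
        return "{\"id\": \"" + id + "\"}";
    }
}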
A reason (perhaps not the reason) is that RESTful services are sessionless. This means you can easily use a load balancer to direct requests to various web servers without having to replicate session state among all of your web servers or making sure all requests from a single session go to the same web server.

Web service provider routing

I am looking to implement a service (web/windows, .net) that maintains a list of available services and can provide an endpoint based on the nature or type of request. The requester can then pass the actual work request to the provided endpoint. The actual work requests can contain very large chunks (from 10MB up to and possibly exceeding a GB) of data.
WCF Routing Service sounds like a perfect fit, but it turns out not to be, because it requires the actual work request to pass through it, creating a bottleneck at the routing service (the whole point is to build a system that can scale out). If I had smaller messages, WCF routing would be a no-brainer.
Is there anything out there that fits the bill? Preferably .NET/windows based?
Do you mean because the requests block for work?
You could use a one-way OperationContract to create async services so as not to block the request pool.
[ServiceContract]
interface IMyContract
{
    // IsOneWay = true: the client returns immediately after firing the call,
    // so the request thread is not held while the work executes.
    [OperationContract(IsOneWay = true)]
    void DoWork();
}
Update
I think I understand your question better now: you are looking to distribute load to different servers to avoid request bottlenecks due to heavy traffic (preferably distributed based on content).
I'd say that WCF Routing is indeed ideal for this. One of the features you can leverage is the failover functionality: you can define multiple backup endpoints, and when one fails, it will automatically move on to the next. There's a good introduction to how this works here.
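From memory, the failover piece is configured with a backup list along these lines; the endpoint names are placeholders and the exact schema is worth checking against the WCF Routing Service docs:

<system.serviceModel>
  <routing>
    <filters>
      <filter name="matchAll" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <!-- traffic goes to primaryService; on failure the backup list takes over -->
        <add filterName="matchAll" endpointName="primaryService" backupList="backups" />
      </filterTable>
    </filterTables>
    <backupLists>
      <backupList name="backups">
        <add endpointName="backupService" />
      </backupList>
    </backupLists>
  </routing>
</system.serviceModel>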
There's also a good article here that talks about load balancing with WCF using the same principles. It provides two solutions for a round-robin filter implementation that lets you load-balance service requests (even though at the beginning he says his general answer to whether WCF supports load balancing is no, for implementation reasons).
If you are worried about all requests routing via the one server and it still becoming a bottleneck, think of web load balancers: it's the same scenario. Sitting in the middle forwarding packets doesn't require much work, and they have no problem handling huge volumes of traffic. I don't think this will be an issue.