Orchestrating sequential/parallel destinations with Istio?

I'm looking into gateways, and one of the requirements is the ability to orchestrate calls. For example, one incoming request should trigger a sequence of calls to different pods.
With Istio, I looked all over the docs and cannot find anything regarding that.
Is that possible with Istio, and how?
Examples with other gateways:
KrakenD's sequential proxy
APISIX's pipeline-request plugin (a sketch of the desired behavior follows)
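For concreteness, here is a minimal sketch in Python of the behavior being asked about, which the gateways above express declaratively: the gateway makes call 1, feeds its result into call 2, and returns a merged response. The service URLs and payload shapes are hypothetical.

```python
# Minimal sketch of sequential orchestration at a gateway: one incoming
# request fans out into a chain of calls. Service URLs are hypothetical.
import requests

def handle_incoming(payload: dict) -> dict:
    # Call 1: look up the user in the first service.
    user = requests.post("http://users.internal/lookup", json=payload).json()
    # Call 2: use call 1's result as input to the second service.
    order = requests.post("http://orders.internal/create", json=user).json()
    # Merge and return a single response to the original caller.
    return {"user": user, "order": order}
```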

Related

Istio extension to provide session affinity among replicas without using consistent hash

I have microservices that need to be stateful. I also want the microservices to be scalable such that replicas can be added or removed as needed, primarily for auto-scaling. What I'm looking for is either an Istio extension that supports this, or an example Istio extension that I could start from to create one. My thought is that when a replica gets a request, if subsequent requests need to come back to that same replica, I'd have the microservice include its hostname(?) in a response header. The caller would pass that header back in subsequent requests, and the extension would notice the header and alter the routing so the request reaches the specific replica that needs to receive it.
I've looked at the consistent hashing mechanism Istio provides, but as far as I can tell it won't yield consistent routing when the number of replicas changes.
Certainly others must have a similar requirement (a sketch of the replica side follows).
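As a hedged illustration of the header-based affinity idea described above, here is a minimal sketch of the replica side only, using Flask; the "x-replica-host" header name is hypothetical, and the Istio/Envoy extension that reads the header back and pins routing is not shown.

```python
# Minimal sketch of the replica side of the affinity scheme described above.
# The "x-replica-host" header name is hypothetical; the routing extension
# that honors it on subsequent requests is not shown.
import socket
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/work", methods=["POST"])
def work():
    resp = jsonify({"handled_by": socket.gethostname()})
    # Advertise this replica so the caller can pin follow-up requests here.
    resp.headers["x-replica-host"] = socket.gethostname()
    return resp

if __name__ == "__main__":
    app.run(port=8080)
```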

AWS - Server to Server pub-sub

We have a system that uses Redis pub/sub to communicate between different parts of the system. To keep it simple, we used pub/sub channels to implement different things. On both ends (producer and consumer) we have servers running code that I see no way to convert into Lambda functions.
We are migrating to AWS and, among other changes, we are trying to replace Redis with a managed pub/sub solution. The requirement is fairly simple: a managed broker that allows publishing a message from one node and subscribing to its reception from zero or more other nodes.
It seems impossible to achieve this with any of the available solutions:
Kinesis - It is a streaming solution for data ingestion (similar to Apache Pulsar)
SNS - From the documentation it looks like exactly what we need, until we realize that there is no way to connect a server (not a Lambda) except via a custom HTTP endpoint.
EventBridge - Same issue as with SNS
SQS - It is a queue, not a pub/sub.
Amazon MQ / RabbitMQ - It is a queue, not pub/sub, and it is also not a SaaS solution but rather an installation on a node you own.
I see no reason to remove a feature such as subscribing from a server, which is why I was sure it would be present in one or more of the available solutions. But we went through the documentation and attempted to consume from SNS and EventBridge without success. Are we missing something? How can we achieve what we need?
Example
Assume we have an API server layer deployed on ECS with a load balancer in front. The API has two endpoints: a PUT to update a document, and an SSE endpoint to listen for updates on documents.
Assuming a simple round-robin load balancer, an update for document1 may occur on node1 while a client has an ongoing SSE request for the same document on node2. This can be done with a Redis backbone: node1 publishes on the document1 topic and node2 is subscribed to the same topic (see the sketch below). This solution is fast and efficient (in this case, at-most-once delivery is perfectly acceptable).
Since this is just an example, we will not consider WebSocket pub/sub APIs or other ready-made solutions for this specific use case.
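A minimal sketch of that Redis backbone, using the redis-py client; the host name follows our setup and the channel name follows the example, with error handling omitted:

```python
import redis

r = redis.Redis(host="redis.internal", port=6379)

# node1, on PUT for document1: publish the update event.
r.publish("document1", "updated")

# node2, holding the SSE connection: subscribe and forward each message
# to the open SSE response (at-most-once, as noted above).
p = r.pubsub()
p.subscribe("document1")
for message in p.listen():
    if message["type"] == "message":
        print(message["data"])  # write to the SSE stream here
```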
Lambda
The subscriber side cannot be a Lambda. Since two distinct contexts are involved (the SSE HTTP request and the SNS event), two distinct Lambdas would fire with no way to 'stitch' them together.
SNS + SQS
We hesitate to use SQS in conjunction with SNS, because that solution would add a lot of unneeded complexity (see the sketch after this list):
The number of nodes is not known in advance and changes as we scale, requiring an automated system to add and remove SQS queues.
Persistence is not required
Additional latency is introduced
Additional infrastructure cost
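To make that complexity concrete, here is a hedged sketch of what each node would have to do with boto3; the topic ARN and queue name are hypothetical, and the queue-policy setup needed for SNS to deliver into SQS is omitted:

```python
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# On scale-out, each node creates its own queue...
queue_url = sqs.create_queue(QueueName="node-abc123-updates")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# ...subscribes it to the shared topic...
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:documents",
    Protocol="sqs",
    Endpoint=queue_arn,
)

# ...polls it for messages, and must tear it all down again on scale-in.
messages = sqs.receive_message(QueueUrl=queue_url, WaitTimeSeconds=20)
```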
HTTP Endpoint
This is the closest thing to a programmatic subscription, but it suffers from issues similar to the SNS + SQS solution (a sketch of the subscription call follows this list):
The number of nodes is unknown, requiring endpoint subscriptions to be added and removed automatically.
Either we expose one endpoint per node or we need special configuration on the load balancer to route each message to the appropriate node.
Additional API endpoints must be exposed, maintained, and secured.
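A minimal sketch of the programmatic HTTP subscription itself; the topic ARN and endpoint URL are hypothetical, and SNS first POSTs a confirmation request that the node must answer before delivery starts:

```python
import boto3

sns = boto3.client("sns")
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:documents",  # hypothetical
    Protocol="http",
    Endpoint="http://node1.internal.example.com/sns-handler",  # hypothetical
)
```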

Use serverless HERE API in Lambda to extract traffic flow

I deployed a traffic service on AWS that uses a Lambda function, but I have a hard time understanding how it works. For instance, I see that the Lambda function contains .js files for traffic, but I don't understand which part I can edit to create proper requests for traffic flow based on geo-coordinates.
I found an article about a geocoding app, but it felt a bit different from what I am trying to achieve, which is to create a function just to retrieve traffic flow information.
After you deploy the traffic application in Lambda, you can retrieve the traffic info via the API Gateway endpoints:
https://<api-id>.execute-api.<region>.amazonaws.com/Prod/traffic/api/traffic/
https://<api-id>.execute-api.<region>.amazonaws.com/Prod/traffic/api/traffic.maps/
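As a hedged example, a flow request against the deployed endpoint might look like the sketch below; the API id, region, resource path, and bbox value are placeholders to check against the linked docs:

```python
import requests

# Placeholders: substitute your own API id and region from the deployment.
base = "https://<api-id>.execute-api.<region>.amazonaws.com/Prod/traffic"

resp = requests.get(
    base + "/api/traffic/flow.json",
    params={"bbox": "52.5233,13.4035;52.5181,13.4159"},  # corner coordinates
)
print(resp.json())
```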
More details can be found in the two docs below:
https://github.com/heremaps/here-aws-sar/tree/master/serverlessFunctions/traffic
https://developer.here.com/documentation/traffic/dev_guide/topics_v6.1/example-flow-intro.html
If you are looking for rendered live-traffic map tiles, I would suggest using the map tile application: https://github.com/heremaps/here-aws-sar/tree/master/serverlessFunctions/maptile

How should microservices developed using AWS API Gateway + Lambda/ECS talk?

I am developing a "microservices" application using AWS API Gateway with either Lambda or ECS for compute. The issue now is that communication between services happens via API calls through the API Gateway. This feels inefficient and less secure than it could be. Is there a way to make my microservices talk to each other in a more performant and secure manner, like talking directly within the private network?
One way I thought of is multiple levels of API gateway.
1 public API gateway
1 private API gateway per microservice. And each microservice can call another microservice "directly" inside the private network
But this way I need to "duplicate" my routes in two levels of API... this does not seem ideal. I was thinking maybe use {proxy+}, so anything under /payment/{proxy+} goes to the payment API gateway and so on. There are still two levels of API gateway... but this seems to be the best I can do?
Maybe there is a better way?
There are going to be many ways to build micro-services. I would start by familiarizing yourself with the whitepaper AWS published: Microservices on AWS, Whitepaper - PDF version.
In your question you stated: "The issue now is communication between services are via API calls through the API gateway. This feels inefficient and less secure than it can be. Is there a way to make my microservices talk to each other in a more performant and secure manner?"
Yes. In fact, the AWS whitepaper and the API Gateway FAQ refer to API Gateway as a "front door" to your application. The intent of API Gateway is to be used by external clients communicating with your AWS services, not by AWS services communicating with each other.
There are several ways AWS resources can communicate with each other to call micro-services. A few are outlined in the whitepaper, and this is another resource I have used: Better Together: Amazon ECS and AWS Lambda. The services you use will be based on the requirements you have.
By breaking monolithic applications into small microservices, the communication overhead increases because microservices have to talk to each other. In many implementations, REST over HTTP is used as a communication protocol. It is a light-weight protocol, but high volumes can cause issues. In some cases, it might make sense to think about consolidating services that send a lot of messages back and forth. If you find yourself in a situation where you consolidate more and more of your services just to reduce chattiness, you should review your problem domains and your domain model.
To my understanding, the root of your problem is routing of requests to micro-services. To maintain the "Characteristics of Microservices" you should choose a single solution to manage routing.
API Gateway
You mentioned using API Gateway as a routing solution. API Gateway can be used for routing... however, if you choose to use API Gateway for routing, you should define your routes explicitly in one level. Why?
Using {proxy+} increases the attack surface because it requires routing to be properly handled in another microservice.
One of the advantages of defining routes in API Gateway is that your API is self-documenting. If you have multiple API gateways, it will become convoluted.
The downside is that it will take time, and you may have to change existing APIs that have already been defined. But you may already be making changes to the existing code base to follow microservice best practices.
Lambda or other compute resource
Despite the reasons listed above to use API Gateway for routing, another resource can handle routing properly if configured correctly. You can have API Gateway proxy to a Lambda function that has all microservice routes defined, or to another resource within your VPC with routes defined.
Result
What you do depends on your requirements and time. If you already have an API defined somewhere and simply want API Gateway to be used to throttle, monitor, secure, and log requests, then you will have API Gateway as a proxy. If you want to fully benefit from API Gateway, explicitly define each route within it. Both approaches can follow micro-service best practices, however, it is my opinion that defining each public API in API Gateway is the best way to align with micro-service architecture. The other answers also do a great job explaining the trade-offs with each approach.
I'm going to assume Lambdas for the solution, but they could just as well be ECS instances or ELBs.
Current problem
One important concept to understand about Lambdas before jumping into the solution is the decoupling of your application code from its event source.
An event source is a way to invoke your application code. You mentioned API Gateway; that is only one method of invoking your Lambda (an HTTP request). Other interesting event sources relevant to your solution are:
API Gateway (as noted, not effective for inter-service communication)
Direct invocation (via the AWS SDK; can be sync or async)
SNS (pub/sub, event bus)
There are more than 20 different ways of invoking a Lambda; see the documentation.
Use case #1 Sync
So, if your HTTP response depends on one Lambda calling another and on that second Lambda's result, a direct invoke might be a good enough solution; this way you can invoke the Lambda synchronously. It also means that Lambda should be subscribed to API Gateway as an event source and have code to normalize the two different types of events. (This is why Lambda documentation usually has event as one of the parameters.)
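A minimal sketch of that synchronous direct invoke with boto3; the function name and payload are hypothetical:

```python
import json
import boto3

lam = boto3.client("lambda")
resp = lam.invoke(
    FunctionName="second-microservice",  # hypothetical name
    InvocationType="RequestResponse",    # sync; "Event" would be async
    Payload=json.dumps({"orderId": 42}),
)
# The invoked Lambda's return value comes back in the Payload stream.
result = json.loads(resp["Payload"].read())
```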
Use case #2 Async
If your HTTP response doesn't depend on the other microservices' (Lambdas') execution, I would highly recommend SNS for this use case: your original Lambda publishes a single event, and you can have more than one Lambda subscribed to that event executing in parallel.
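A minimal sketch of that async fan-out; the topic ARN is hypothetical. The originating Lambda publishes once, and every Lambda subscribed to the topic runs in parallel.

```python
import json
import boto3

def handler(event, context):
    # Publish a single event; all subscribed Lambdas fire in parallel.
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:order-events",
        Message=json.dumps({"orderId": 42, "status": "created"}),
    )
    return {"statusCode": 202}
```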
More complicated use cases
For more complicated use cases:
Batch processing, fan-out pattern: example #1, example #2
Chained execution (one Lambda calls the next, which calls the next, etc.): AWS Step Functions (see the sketch below)
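A minimal sketch of kicking off such a chain with boto3; the state machine ARN and input are hypothetical, and the state machine definition itself, which wires Lambda after Lambda, is not shown:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")
# Start the state machine that chains the Lambdas together.
sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:orderFlow",
    input=json.dumps({"orderId": 42}),
)
```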
There are multiple ways and approaches to do this; which fits best depends on your current setup and infrastructure and on your flexibility to implement or modify the existing code base.
Communication between services behind the API Gateway needs to be carefully implemented to avoid loops, exposing your data, or, even worse, blocking yourself.
When using HTTP to communicate between services, it is common to see traffic leave the current infrastructure and come back in through the same API Gateway, something that can be avoided by going directly to the other service instead.
For example, when service B needs to communicate with service A, it is advisable to do so via an internal (ELB) endpoint instead of going out and back through the API Gateway.
Another approach is to use HTTP "only" at the API Gateway and another protocol, for example gRPC, to communicate within your services. (This is not the best alternative in some cases, since it depends on your architecture and your flexibility to modify/adapt existing code.)
There are cases where your infrastructure is more complex and you cannot communicate on demand between your containers, or the endpoints are simply unreachable. In these cases, you could try to implement an event-driven architecture (SQS and AWS Lambda).
I like going asynchronous by using events/queues when possible; from my perspective it "scales" better, and most of the services become just consumers/workers with no need to listen for incoming requests (no HTTP needed). Here is an article explaining how to use RabbitMQ for this purpose: communicating microservices within Docker.
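A minimal sketch of that consumer/worker pattern with pika and RabbitMQ; the exchange name and host are hypothetical:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
ch = conn.channel()
ch.exchange_declare(exchange="orders", exchange_type="fanout")

# Producer side: publish an event instead of making an HTTP call.
ch.basic_publish(exchange="orders", routing_key="", body=b'{"orderId": 42}')

# Consumer side: each worker binds its own queue and just processes messages.
queue = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="orders", queue=queue)
ch.basic_consume(
    queue=queue,
    on_message_callback=lambda c, m, p, body: print(body),
    auto_ack=True,
)
ch.start_consuming()  # blocks; no HTTP listener needed
```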
These are just some ideas that I hope help you find your own "best" way, since this varies a great deal and every scenario is unique.
I don't think your question is strictly related to AWS; it is more about ways of communication between services in general.
API Gateway is used as an edge service, i.e., a service at your backend boundary that is accessible to external parties. For communication behind the API Gateway, between your microservices, you don't necessarily have to go through the API Gateway again.
There are two ways of communication I'd mention for your case:
HTTP
Messaging
HTTP is the most simplistic way of communication, as it's naturally easier to understand and there are tons of libraries that make it easy to use.
Despite these advantages, there are a couple of things to look out for (a retry sketch follows this list):
Failure handling
Circuit breaking in case a service is unavailable
Consistency
Retries
Using service discovery (e.g. Eureka) to make the system more flexible when calling another service
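As a minimal sketch of the retry point above, using requests with urllib3's Retry helper; the service URL is hypothetical, and circuit breaking proper would need something like the pybreaker library:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=3,                           # up to three retries...
    backoff_factor=0.5,                # ...with exponential backoff
    status_forcelist=[502, 503, 504],  # only on transient server errors
)
session.mount("http://", HTTPAdapter(max_retries=retries))

# Hypothetical internal service call, with a hard timeout as well.
resp = session.get("http://payment-service.internal/health", timeout=2)
```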
On the messaging side, you have to deal with asynchronous processing and infrastructure concerns such as setting up and maintaining the message broker. It's not as easy to use as plain HTTP, but it lets you solve consistency problems by being eventually consistent.
Overall, there are tons of things to consider, and everything is about trade-offs. If you are just starting with microservices, I think it's best to start with HTTP for communication and then slowly move to the messaging alternative.
For example, in the Java + Spring Cloud Netflix world, you can have Eureka with Feign, and with that it's really easy to use logical addresses for the services, which Eureka translates to actual IPs and ports. Also, if you want to use Swagger for your REST APIs, you can even generate Feign client stubs from it.
I've had the same question on my mind for a while now and still cannot find a good generic solution... For what it's worth...
If the communication is one-way and the "caller" does not need to wait for a result, I find Kinesis streams very powerful: just post a "task" onto the stream and have the stream trigger a Lambda to process it. But obviously, this works in very limited cases...
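A minimal sketch of that one-way pattern; the stream name and payload are hypothetical, and a Lambda with the stream as its event source picks the record up:

```python
import json
import boto3

boto3.client("kinesis").put_record(
    StreamName="task-stream",  # hypothetical stream
    Data=json.dumps({"task": "resize-image", "key": "img/42.png"}).encode(),
    PartitionKey="img/42.png",  # determines shard placement
)
```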
For the request-reply world, I call the API Gateway endpoints just like an end user would (with the added overhead of marshaling and unmarshaling data to "fit" into the HTTP world, and unnecessary multiple authentications).
In rare cases, I may have a single backend Lambda function that gets invoked both by the API Gateway Lambda and by other microservices directly. This adds an extra "hop" for end users (instead of [UI -> API Gateway -> Gateway Lambda], I now have [UI -> API Gateway -> Gateway Lambda -> Backend Lambda]), but it makes microservice-originated calls faster (since the call and all associated data no longer need to be "tunneled" through an HTTP request). On the other hand, this makes the architecture more complicated (I no longer have a single official API, but now have "back channel" direct dependencies).

AWS Server-less Architecture using Lambda and SQS

I've been learning more and more about AWS lately, reading through the whitepapers and working my way through the various services. I've been working on PHP applications and front-end dev for a while now. Two things really stuck out to me: serverless architecture using Lambdas with event triggers, and SQS (queues). For the last three years I have been working with REST over HTTP with frameworks like Angular.
It occurred to me, though, that one could create an entire back-end/service layer with Lambdas and message queues alone. Perhaps I'm naive, as I have never used that type of architecture for a real-world project, but it seems like a very simple means of building a service layer.
Has anyone built a web application back end consisting of only Lambdas and message queues, as opposed to "traditional" HTTP requests with REST? If so, what types of drawbacks are there to this type of architecture, besides relying so heavily on a vendor like AWS?
For example, wouldn't it be entirely possible to build a CMS using these technologies where the scripts create the AWS assets programmatically given a key with full admin rights to an account?
Yes, you can practically create the entire backend service using serverless architecture.
There are a lot of AWS services that usually come into play in the serverless world: DynamoDB, SNS, SQS, and S3, to name a few.
AWS Lambda is the backbone and acts as the glue that binds these services together.
Serverless doesn't mean you move away from "traditional" HTTP requests to message queues. If you need a web interface, you will still use HTTP. You would primarily use message queues to decouple your services.
So, if you want the service to be accessible over HTTP just like your REST services and still be serverless, you can do that as well. For that you will need to use AWS API Gateway in conjunction with AWS Lambda.
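A minimal sketch of that API Gateway + Lambda combination, using the Lambda proxy integration response shape; the body content is hypothetical:

```python
import json

def handler(event, context):
    # "event" carries the HTTP method, path, headers, and body from
    # API Gateway; the return value becomes the HTTP response.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello from a serverless backend"}),
    }
```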
One primary drawback/limitation is that debugging is not very straightforward: you cannot log in to the system and cannot attach remote debuggers. And then, obviously, you get tied to the vendor.
Then there are limits on resources. For example, Lambda offers a maximum memory allocation of 10 GB, so if you need to run a compute-intensive job that needs more memory and can't be broken into subtasks, then serverless (AWS Lambda) is not an option for you.