High reliability of a web service

I want to build a web service that is highly reliable from the point of view of a third-party web service. High reliability here means that any single request from the third-party service will be successfully processed by my web service. The third-party service has no retry mechanism for failed requests, nor can it change its request format (an HTTP POST with fields and values in the body).
I don't consider a fail-over solution such as multiple nodes behind a load balancer to meet the requirement, because a node may fail and the load balancer may still route a request to it before it's removed from the pool.
I am considering using something like Amazon SQS that receives the request from the third party and passes it on to my web service, because SQS has a retry mechanism. However, the difficulty is that SQS seems to require the content to be supplied in its "Message" parameter, and that cannot be achieved by the third-party service.
Could there be a solution?

The only real solution I can think of to ensure completion of a request is to add a wrapper to the third-party web service so that it DOES have a retry mechanism. Any other solution, including HA, will have a point of failure. Consider that if the Amazon SQS service fails, you will still have a third-party web service that requests information from the down service and fails.

I don't consider a fail-over solution such as multiple nodes behind a load balancer to meet the requirement, because a node may fail and the load balancer may still route a request to it before it's removed from the pool.
What do you think the architecture of SQS looks like? They have front ends behind load balancers. If a front end fails, those requests will fail. SQS is still highly available because if you retry the request, it will most likely succeed. To avoid showing errors to users, you should build retries into the client side.
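To make that last point concrete, here is a minimal retry-with-backoff sketch (the endpoint and payload are hypothetical). In the OP's situation this logic would have to live in the wrapper suggested above, since the third party itself cannot retry:

```python
# Retry a POST on network errors and 5xx responses, with exponential backoff.
import time
import requests

def post_with_retries(url, data, max_attempts=5, base_delay=0.5):
    for attempt in range(1, max_attempts + 1):
        try:
            resp = requests.post(url, data=data, timeout=5)
            if resp.status_code < 500:
                return resp  # success, or a client error a retry won't fix
        except requests.RequestException:
            pass  # connection reset, DNS hiccup, timeout, ...
        time.sleep(base_delay * 2 ** (attempt - 1))
    raise RuntimeError(f"request failed after {max_attempts} attempts")

# post_with_retries("https://api.example.com/ingest", {"field": "value"})  # hypothetical
```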

Related

Is it possible to return the response from a Lambda when using API Gateway with EventBridge?

I have created a microservice architecture which flows as follows:
API call -> API Gateway -> EventBridge -> SNS -> Lambda
The reason for this is to use SNS instead of SQS to decouple the applications for truly serverless compute, without Lambda having to continuously poll SQS (pub/sub push rather than poll).
The trouble is that although the execution is fine and the Lambdas run as expected, the response received by the user or app is the EventBridge response. I can't find any docs on how EventBridge handles responses for HTTP requests made through API Gateway.
Does anyone have any ideas or docs to push me in the right direction?
Thanks!
In your setup it's not possible to have the Lambda response proxied back to the API request initiator, as your client is fully decoupled from the actual request processing.
An almost identical issue was experienced here.
You need to rethink the process as a whole:
What operation do you want to complete via the API request?
Does the processing of the request really need to be asynchronous (i.e. does it take a long time to complete)?
Can you handle the request with a Lambda function, delegate to SNS from there, and finally generate the desired response back to the client? (See the sketch below.)
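A minimal sketch of that last option, assuming an API Gateway Lambda proxy integration; the topic ARN, environment variable, and the 202 response shape are placeholders rather than anything from the original setup:

```python
# Sketch: API Gateway (proxy integration) -> Lambda -> SNS, with a synchronous reply.
import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:eu-west-1:123456789012:my-topic")  # placeholder

def handler(event, context):
    body = event.get("body") or "{}"
    # Hand the heavy work off asynchronously via SNS ...
    publish = sns.publish(TopicArn=TOPIC_ARN, Message=body)
    # ... and still return a response to the original caller.
    return {
        "statusCode": 202,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"accepted": True, "messageId": publish["MessageId"]}),
    }
```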
So, as it turns out, the answer is yes and no for anyone coming across this in the future.
With the current setup another database is required, and the responses can be inserted into it with a transaction ID. This transaction ID can be generated by the client during the request, so a subsequent call can be made to find the response in the table.
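A rough sketch of that pattern, assuming DynamoDB as the extra database (the post only says "another database"); the table and attribute names are hypothetical:

```python
# Store results under a client-generated transaction ID, look them up in a second call.
import json
import boto3

table = boto3.resource("dynamodb").Table("responses")  # hypothetical table

def store_result(transaction_id, result):
    """Called at the end of the asynchronous processing: persist the outcome."""
    table.put_item(Item={"transaction_id": transaction_id,
                         "result": json.dumps(result)})

def get_result(event, context):
    """Second API call from the client: look the response up by its transaction ID."""
    key = {"transaction_id": event["pathParameters"]["id"]}
    item = table.get_item(Key=key).get("Item")
    if item is None:
        return {"statusCode": 404, "body": json.dumps({"status": "pending"})}
    return {"statusCode": 200, "body": item["result"]}
```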
Alternatively, WebSocket or GraphQL APIs would allow for asynchronous invocation; it really depends on your use case and the complexity you are willing to accept.
Thanks for everyone’s inputs!

How to secure communication between Pact Broker, Consumer and Provider

We are planning to implement CDC in our project and Pact is being considered as the primary candidate. Currently I am working on a POC to set up an end-to-end flow with CI/CD integration in GitLab. I have a couple of questions related to authentication/authorization/security.
Consumer - Pact Broker: the consumers here are external partners. I see client-side certificates as an option, but I am not able to find much documentation or info on the web about the options available. The Pact Broker will be hosted in AWS. Can we place it behind a gateway?
Pact Broker and Provider: both components are part of our infrastructure. In this case I understand that we will be generating a GitLab trigger token which will be passed as part of future requests to the provider pipeline. We will be using the same token every time.
Could you please advise on the options available in both cases to make the communication more secure?
Thanks in advance.
We are planning to implement CDC in our project and Pact is being considered as the primary candidate.
Good choice! :)
I have a couple of questions related to authentication/authorization/security
The OSS broker doesn’t have any security controls other than basic auth and read-only/read-write access permissions (which isn’t very appropriate for external use for obvious reasons). There is basic support for redacting credentials in the UI, but you can still get them through API calls (even for read-only accounts).
Consumer - Pact Broker: the consumers here are external partners. I see client-side certificates as an option, but I am not able to find much documentation or info on the web about the options available. The Pact Broker will be hosted in AWS. Can we place it behind a gateway?
Where did you see that client certificates were supported? I’m sorry to say that is incorrect.
You can definitely put it behind a gateway/reverse proxy type thing: https://docs.pact.io/pact_broker/configuration/#running-the-broker-behind-a-reverse-proxy
You would need to add your own authentication layer for this purpose, so using an API gateway might be a good starting point.
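Once it is behind the gateway/proxy, a quick smoke test from a consumer pipeline could look like the sketch below (the URL and credentials are placeholders for whatever you configure; basic auth is the mechanism the OSS broker itself supports):

```python
# Verify the broker answers through the proxy with basic auth.
import requests

BROKER_URL = "https://pact-broker.example.com"  # placeholder

resp = requests.get(BROKER_URL,
                    auth=("read_only_user", "secret"),       # placeholders
                    headers={"Accept": "application/hal+json"},
                    timeout=10)
resp.raise_for_status()
print(resp.json()["_links"]["self"])  # the broker's HAL index
```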
Pact Broker and Provider: both components are part of our infrastructure. In this case I understand that we will be generating a GitLab trigger token which will be passed as part of future requests to the provider pipeline. We will be using the same token every time.
The provider-side authentication is the same as on the consumer side.
Alternatively, we have created Pactflow, a commercial version of the OSS Broker designed for enterprise use, which wraps a full security model around the OSS broker, including API tokens, secrets, teams management, and other useful features (see https://pactflow.io/features/ for more). We are also almost ready to release CI users and fine-grained permissions management.

Can Server sent events (sse) work with AWS Cloudfront?

Is there a way to make SSE (Server-Sent Events) work through CloudFront?
I know they announced WebSocket support a few years ago, but I cannot find any reference or cases related to using SSE communication through CloudFront.
I did a test and the client response ends with a 504 Gateway Time-out after approximately a minute.
Yes, you can use SSE (Server-Sent Events) with CloudFront.
There are many different ways to implement your API behind CloudFront, so in some cases there could be limitations. But let me describe one standard and straightforward way to set up your application that is tested to work with SSE.
Let's say you have an EC2 instance (at least one) behind an ALB (Application Load Balancer). Even if you don't need more than one EC2 instance, you might need an ALB in order to use HTTPS. Besides importing your TLS/SSL certificate into your CloudFront distribution, you will also need your API to be accessible (by CloudFront itself) via HTTPS (don't forget it could be located on another continent).
In CloudFront you can create a distribution with an origin that basically maps https://yourapp.com/api to that ALB. Note that CloudFront also allows you to forward traffic to a different (sub)domain if that's where your API/ALB is (a setup I've also tested successfully).
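On the application side, the origin just has to stream the SSE content type and mark it as non-cacheable. A minimal origin sketch (Flask here is an assumption, not part of the setup described above):

```python
# Minimal SSE origin: stream text/event-stream and disable caching.
import time
from flask import Flask, Response

app = Flask(__name__)

@app.route("/api/events")
def events():
    def stream():
        n = 0
        while True:
            yield f"data: tick {n}\n\n"  # SSE wire format: "data: ...\n\n"
            n += 1
            time.sleep(5)  # periodic messages also help against idle timeouts
    return Response(stream(), mimetype="text/event-stream",
                    headers={"Cache-Control": "no-cache"})

if __name__ == "__main__":
    app.run(port=8080)
```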
WebSockets work with AWS API Gateway. You can also use AppSync (GraphQL) subscriptions. CloudFront can't send anything by itself.
AWS resources are linked with EventBridge (basically an asynchronous way to trigger an event), and it is stateless, so this is not possible. The only way is to deploy your app in some sort of web container with which you can achieve the expected behaviour.
Another way is to use AWS API Gateway's WebSocket support to open a (full-duplex) connection and transfer whatever data you want back and forth.

How do I set up an HTTP test for a Route53 -> EC2 -> API endpoint reverse proxy pathway?

I have built an EC2 reverse proxy (Nginx) that communicates with an external API endpoint over the internet. I have a Route53 DNS entry with an A record pointing to my EC2 instance. There are a few endpoints (Nginx locations) and, depending on which URL you hit, you are directed to a specific proxy location and forwarded to the right endpoint on the external API. It all works great.
Now I want to create some type of job that will test this process periodically to ensure that it's running and notify me if it's not. AWS has so many tools, and I think I need to use Lambda and API Gateway.
I'd like to hit my URL (the Route53 DNS name), go through the EC2 instance, and receive a response from the endpoint server. My site does this, Postman can, but I can't figure out how to accomplish this in an automated way and be alerted based on the response values.
How can I test my full pathway (www.example.com/option -> nginxEC2 path('/option') -> www.endpoint.com/option) and be notified based on the results?
EDIT: I need to be able to send a body with this. If I send the request without a body the server returns a 404; if I can send a body/payload, I'll get a response.
EDIT: Basically I'm looking for a way to hit my DNS name, which routes through the A record to my reverse proxy and on to the endpoint. I just need to make an HTTP request to the domain, get an answer back, and know the status code.
Mark B's solution is the closest, as the free site he sent me has an option to pay for this service. Going to leave it open a few more days.
You definitely don't need API Gateway for this. That wouldn't help you test this at all. API Gateway would just give you an entirely new API that you would need to test.
You could use Lambda for this, as you mentioned. You would write a Lambda function that hits the URLs you want to test, checks the results, and sends you a message over SES or SNS or some other means when a check fails. The Lambda function can be configured to run automatically on a schedule.
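A sketch of such a function (the URL, payload, and topic ARN are placeholders; it sends a POST body because of the edit in the question, and alerts over SNS on failure). It can then be attached to an EventBridge/CloudWatch Events schedule rule:

```python
# Periodic health check: POST through the proxy, alert via SNS if the status is wrong.
import json
import urllib.error
import urllib.request
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:uptime-alerts"  # placeholder
URL = "https://www.example.com/option"                          # placeholder

def handler(event, context):
    req = urllib.request.Request(
        URL,
        data=json.dumps({"ping": True}).encode(),  # placeholder body
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            status = resp.status
    except urllib.error.HTTPError as err:
        status = err.code      # e.g. the 404 you see when no body is sent
    except Exception:
        status = None          # DNS, connection, or timeout failure
    if status != 200:
        sns.publish(TopicArn=TOPIC_ARN,
                    Message=f"Health check failed for {URL}: status={status}")
    return {"url": URL, "status": status}
```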
However, AWS already has a service that does exactly what you are looking for: Route53 Health Checks.
What you are describing is called an HTTP health check or HTTP uptime monitor. There are tons of services that provide this feature, some of them free.
It looks like the word you're looking for is trace -- you want to trace requests through your application. The AWS offering for that is X-Ray. As you can see in their official documentation, you need to use their SDK to instrument your application, which talks to a daemon on your EC2 instance. You can then integrate with CloudWatch and SNS to be notified of errors (e.g. 4xx codes): https://aws.amazon.com/blogs/devops/using-amazon-cloudwatch-and-amazon-sns-to-notify-when-aws-x-ray-detects-elevated-levels-of-latency-errors-and-faults-in-your-application/
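For illustration, instrumenting a Python application with the X-Ray SDK looks roughly like the sketch below (it assumes a Flask app and a running X-Ray daemon, which is not necessarily the Nginx-only setup in the question; the service name is a placeholder):

```python
# Minimal X-Ray instrumentation for a Flask app.
from flask import Flask
from aws_xray_sdk.core import xray_recorder, patch_all
from aws_xray_sdk.ext.flask.middleware import XRayMiddleware

app = Flask(__name__)
xray_recorder.configure(service="my-proxied-api")  # placeholder service name
XRayMiddleware(app, xray_recorder)  # records a segment for each incoming request
patch_all()  # records subsegments for outgoing calls via supported libraries (requests, boto3, ...)
```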
Hope it helps!

Building a web application using WebSockets and AWS

I'm trying to create a collaborative web application where multiple users can work together on various (shared) projects. So far I have a JavaScript client and one local jWebSocket server.
To remain scalable upon deployment, I thought of two options:
Option 1
I can use AWS IoT instead of multiple jWebSocket servers. Publishing changes of a project is easy, I would just need to publish to e.g. /project/{project-id}. But how would the traditional request-response mechanism work?
The Problem: EC2 instances handling requests would be reachable by publishing to distinct topics (e.g. /server/1). But when the JS client connects to AWS IoT, it does not know of any EC2 instance to send requests to. How could I assign each client to an instance/topic?
Option 2
Run jWebSocket servers on multiple EC2 instances behind an AWS Application Load Balancer. The balancer would simply assign each client to a server and the traditional request-response flow would not be a problem. But what about pushing changes?
The Problem: Because each server has its own set of connected clients, it can not push changes to clients connected to another server.
Remarks
Mixing jWebSocket for sending requests and AWS IoT for receiving events seems like a sloppy solution.
I assume I can programmatically adapt the IoT policies per cognito identity to allow/deny the subscription to specific projects.
Using AWS Lambda and relinquishing servers altogether is not an option due to the high latency introduced by Lambda (if you've had different experiences, please share).
Related posts
IoT request response protocol
Thanks for any thought you could give me on this issue.
I've got it. The first suggestion in this question pointed me in the right direction. The solution allows all clients to maintain a direct WebSocket connection to the server they originally connected to, without subscribing to specific topics.
It works as follows:
When a client connects to a server, the server subscribes to the client's channel
If a server needs to send a message to a client that is not connected to it, it publishes that message to the client's channel
(you guessed it) The server that is subscribed to the channel can then process the message on the first server's behalf
"Pusher" in the diagram describes this SaaS, but can of course be replaced by any other messaging service.