I have a FastAPI app running on EC2 behind a REST API, with several endpoints. Now I would like to expose it through a WebSocket API. To do this I have a few questions:
On the FastAPI side, what should I do? I have read that people introduce Mangum and a main handler for Mangum, but they always have only one endpoint, and I have several.
Also, they always use a Lambda function. Could I use an EC2 instance instead?
How do the $connect and $disconnect routes work in this case? What do I have to add to my FastAPI app so I can still use my endpoints?
Processes inside my FastAPI app can take a long time to answer (e.g. 20 s), so I need to move to WebSockets to avoid timeouts. If you think there is a better solution done a different way, I'll be happy to hear about it.
I have spent weeks on this same issue. I tried countless configurations and in the end I ended up not using FastAPI. I replaced my @app.websocket routes with a basic lambda_handler(event, context) function so that I could use the connect, disconnect and sendmessage handlers as per the AWS documentation. To this day I still can't figure out how to do it with FastAPI, and it bugs me. Have you found a solution to this?
This helped - https://docs.aws.amazon.com/code-samples/latest/catalog/code-catalog-python-cross_service-apigateway_websocket_chat.html
Editing this to add that I am using Python.
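For what it's worth, a minimal sketch of that plain-handler pattern (no FastAPI/Mangum) might look like the following. The routing on requestContext.routeKey is how API Gateway WebSocket APIs dispatch; the 'sendmessage' route name and the echo payload are just illustrative assumptions:

import json

import boto3

def lambda_handler(event, context):
    # API Gateway WebSocket APIs put the route key and connection id here.
    route = event['requestContext']['routeKey']
    connection_id = event['requestContext']['connectionId']

    if route == '$connect':
        # Persist connection_id (e.g. in DynamoDB) if you need to push to it later.
        return {'statusCode': 200}
    if route == '$disconnect':
        # Clean up the stored connection_id.
        return {'statusCode': 200}

    # Custom routes such as 'sendmessage': push a reply back over the socket.
    domain = event['requestContext']['domainName']
    stage = event['requestContext']['stage']
    client = boto3.client('apigatewaymanagementapi',
                          endpoint_url=f'https://{domain}/{stage}')
    body = json.loads(event.get('body') or '{}')
    client.post_to_connection(ConnectionId=connection_id,
                              Data=json.dumps({'echo': body}).encode())
    return {'statusCode': 200}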
I would like to implement a queuing mechanism for sending out email via PHPMailer on Amazon EC2. I have set up Beanstalkd correctly on the server and can access it via a console. The mail doesn't seem to go through (I have tried various combinations of the sample code). In addition, do I need to set up a cron job that would call one of the producer or consumer files?
Does anyone have working code for sending out email via PHPMailer/Pheanstalk on Amazon EC2, please?
Thanks.
Beanstalkd is great, and I use it myself; however, don't use it for this: it's reinventing the wheel in a bad way. Instead, install a local mail server such as Postfix and get that to do your queuing for you. This is also much, much simpler, faster, and easier to control. Mail servers are built for managing queues, and they are extremely good at it.
Before you do so, get your mail sending script working – there's no point in even attempting to get something more complex working until you've done that. Also be aware that sending email from EC2 is difficult – Amazon wants you to use their SES service rather than sending directly – you may find sending is blocked altogether. Read the PHPMailer troubleshooting guide to see how to diagnose that.
So I've built this API. It consists of a Lambda function (accessible via API Gateway) which talks to a Neptune graph database instance via websockets.
Everything is wired up and working. But I recently started noticing intermittent 500's coming from the API. After some investigation I found that the Neptune Gremlin server was dropping/refusing connections whenever multiple requests came in close together.
I found this page which suggests that the ephemeral nature of serverless doesn't play nice with websockets, so the websocket connection should be closed manually after each request. But after implementing that I found no difference – still 500's.
The page also suggests that when using Gremlin on Neptune you should probably send HTTP requests to Neptune rather than using websockets:
Alternatively, if you are using Gremlin, consider submitting requests to the Gremlin HTTP REST endpoint rather than the WebSockets endpoint, thereby avoiding the need to create and manage the lifetime of a connection pool.
The downside to this approach is that we would then have to use string-based queries (which means re-writing a large portion of the project). Another downside is that the Gremlin HTTP endpoint returns pretty unstructured data.
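For reference, the HTTP alternative the docs describe boils down to string queries POSTed to the cluster's /gremlin endpoint, roughly like this (the endpoint URL and query are hypothetical placeholders):

import requests

# Hypothetical Neptune cluster endpoint; note the query is a plain string.
resp = requests.post(
    'https://your-neptune-endpoint:8182/gremlin',
    json={'gremlin': 'g.V().limit(1).valueMap()'},
)
print(resp.json())  # results come back as loosely structured JSON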
So what I'm wondering is whether anyone has got Lambda reliably talking to Neptune over websockets? If so, how?
Edit:
Since I'm using the AWS Chalice framework I don't think I really have direct access to the handler function. Below is what my lambda looks like.
And here is the code that connect() is calling:
import os

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection


def connect():
    # Read the Neptune WebSocket endpoint from the environment.
    conn_string = os.environ.get('GRAPH_DB')
    global g
    # Open a remote traversal source over a single WebSocket connection.
    g = Graph().traversal().withRemote(DriverRemoteConnection(conn_string, 'g'))
So when the app starts (when a Lambda instance is spun up), connect() is called and the app gets a connection to Neptune. From there the app passes that global g variable around so as to use the same connection instance for that invocation. I was then calling close() on the DriverRemoteConnection object before returning the results of a request (and that's where I found I was still getting 500's).
Yes, it is possible to use WebSockets within a Lambda function to communicate with Neptune. There are different nuances for doing this depending on the programming language you're using. Ultimately, it comes down to instantiating the client connection and closing it within the handler() of the Lambda function.
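In Python, a minimal sketch of that per-invocation pattern (assuming a plain lambda_handler rather than Chalice, and the same GRAPH_DB environment variable as above) could look like this:

import os

from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection


def lambda_handler(event, context):
    # Open a fresh WebSocket connection for this invocation only.
    conn = DriverRemoteConnection(os.environ.get('GRAPH_DB'), 'g')
    try:
        g = Graph().traversal().withRemote(conn)
        # Illustrative query; substitute your own traversal here.
        count = g.V().count().next()
        return {'statusCode': 200, 'body': str(count)}
    finally:
        # Close the connection before the execution environment is frozen.
        conn.close()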
If using Java [1], you can create the cluster object outside the handler so that it can be reused across Lambda invocations. But the client that is configured from that cluster object must be instantiated and closed during each invocation.
Do you have a snippet of code that you're using that you could share for review?
[1] https://docs.aws.amazon.com/neptune/latest/userguide/best-practices-gremlin-java-close-connections.html
I have multiple applications written in Node.js, Python/Django, or ...
These services work fine, but they need to have async pub/sub communication with each other.
In Node.js there is no problem: you can easily pub/sub to any Redis channel.
Question: My question is, how can I continuously subscribe to a Redis channel and receive data published by other services?
Note: many links suggest using django-channels, but I guess that's not the way to do it. If it is, can anyone give me details on how to do it?
Update:
Django, by default, is not event-based like Node.js. So if I use a Redis client, I would, for example, have to check Redis every second to see whether anything has been published or not. I don't think just using a Redis client in Python will be enough.
Really appreciate it.
There are a lot of alternatives. If you have a FIFO requirement, you have to use queues to connect one microservice to another. If you don't have a big-data problem, RabbitMQ is very practical and effective; if you do have a big-data problem, you can use Kafka. There is a wide variety of services.
If you just want pub/sub, the best tool is Redis: it is very fast and easy to integrate. If you are wondering how to implement it in Python, just look at this article.
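For a sense of the publish side, here is a minimal redis-py sketch; the channel name and payload are made up for illustration:

import json

import redis

r = redis.StrictRedis(host='localhost', port=6379, db=1)
# Any service (Node.js, Django, ...) can publish to a channel; subscribers
# using the 'topic.*' pattern shown in the subscriber below will receive it.
r.publish('topic.events', json.dumps({'user_id': 42, 'action': 'signup'}))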
[Update]
It's possible to create a manage.py command in Django, subscribe to Redis in that management command, and run the script separately from the Django server:
import redis

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **options):
        r = redis.StrictRedis(host='localhost', port=6379, db=1)
        p = r.pubsub()
        p.psubscribe('topic.*')
        # listen() blocks and yields messages as they are published,
        # so no periodic polling is needed.
        for message in p.listen():
            if message:
                print('message received, do anything you want with it.')
In order to handle subscriptions to Redis, you will need a separate, continuously running process (a server) which listens to Redis and then does something with your data. django-channels does the same by running the code in a worker.
As pointed out above, Django provides a convenient way to run such "servers" via the management command approach. When running a Django management command, you have complete access to your code, i.e. to the ORM as well.
One detail: you mentioned async communication. Here you need to take into account that Django's ORM is strictly synchronous code, so you have to pay attention to how you use the ORM from async code. You probably need to clarify what you mean by async here.
As for processing Redis messages, you can use any library that works with it, for example aioredis or redis-py.
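If the async route is what you're after, a minimal subscriber sketch using redis-py's asyncio client (available in redis-py 4.2+, into which aioredis was merged) might look like this:

import asyncio

import redis.asyncio as redis


async def main():
    r = redis.Redis(host='localhost', port=6379, db=1)
    async with r.pubsub() as p:
        await p.psubscribe('topic.*')
        # Iterate over incoming messages without blocking the event loop.
        async for message in p.listen():
            if message['type'] == 'pmessage':
                print('received:', message['data'])

asyncio.run(main())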
I'm new to server technology and don't really understand how servers work (I hope you can shed some light on this as well).
But basically my problem is that I have a Firebase database which I need to update every 20 seconds, all day. The way I think I should solve it is to send an HTTP POST request to the Firebase database every 20 seconds. That means I need a server running a piece of code that sends the HTTP request every 20 seconds. I'm not sure if this is the right way to do it, and even if it is, I don't know how to implement it.
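To make the idea concrete, a minimal Python sketch of that loop (assuming the Firebase Realtime Database REST API and a hypothetical database URL) could be:

import time

import requests

# Hypothetical project URL; POSTing to a path ending in .json appends a record.
FIREBASE_URL = 'https://your-project.firebaseio.com/readings.json'

while True:
    resp = requests.post(FIREBASE_URL, json={'value': 123, 'ts': time.time()})
    resp.raise_for_status()  # surface HTTP errors instead of failing silently
    time.sleep(20)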
Some questions I have are:
Do I definitely need to create a server for this? And if so, what platform is recommended for writing my server code (preferably a free platform)?
I have tried reading up on the available platforms such as AWS and Google Cloud, but I don't really get the terminology used. Are there any tutorials for this?
I am really lost and have been stuck on this for some time; any help is deeply appreciated.
This is achievable by leveraging CloudWatch Events, specifically a rate expression that invokes an SNS topic, which can then hit your HTTP endpoint.
Hope that helps!
I would suggest that you try to keep everything within Firebase: create a Firebase Cloud Function that sends the HTTP request for the update, and use Firebase functions-cron, a cron-like scheduler, to schedule it.
Background:
I have a local application that processes the user's input for about 3 seconds and then returns an answer (output) to the user.
(I deliberately don't want to go into details about my application, so as not to complicate the question and to keep it purely architectural.)
My Goal:
I want to turn my application into a service in the cloud and expose an API
(for the upcoming website, and for clients that will connect to the service without installing the software locally).
Possible Solutions:
Deploy WCF in the cloud and host my application there, so clients can invoke the service and use my application in the cloud (RPC style).
Use a Web API that inserts the request into a queue; a worker role then dequeues requests and posts the results to a DB. So the client sends one request to create the queue entry and another request to get the result (which the Web API reads from the DB); see the sketch just after this list.
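To make solution #2 concrete, here is a minimal local sketch of the queue/worker/poll pattern in Python (an in-memory queue and dict stand in for the real queue service and results DB; all names are illustrative, not a production design):

import queue
import threading
import time
import uuid

from fastapi import FastAPI

app = FastAPI()
jobs = queue.Queue()
results = {}  # stands in for the DB the worker role writes to


def worker():
    # Stand-in for the worker role: dequeue, process, store the result.
    while True:
        job_id, payload = jobs.get()
        time.sleep(3)  # stands in for the ~3 s of real processing
        results[job_id] = payload.upper()


threading.Thread(target=worker, daemon=True).start()


@app.post('/requests')
def create_request(payload: str):
    # First client call: enqueue the work and hand back a ticket.
    job_id = str(uuid.uuid4())
    jobs.put((job_id, payload))
    return {'id': job_id}


@app.get('/requests/{job_id}')
def get_result(job_id: str):
    # Second client call: poll until the worker has stored a result.
    if job_id in results:
        return {'status': 'done', 'result': results[job_id]}
    return {'status': 'pending'}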
The Problems:
If I go with the WCF solution (#1), I can't handle heavy loads of requests, maybe 10-20 simultaneously.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the client will sometimes need to request the results multiple times, which can be a problem.
If I go with the WebAPI-Queue-WorkerRole solution (#2), the process isn't synchronous: the client doesn't get the result as soon as his request has been processed; he has to ask for it.
Questions:
In the WebAPI-Queue-WorkerRole solution (#2), can I somehow alert the client once his request has been processed, so I can save the client multiple requests for the result?
Isn't asking multiple times for the result outdated? I remember that 10-15 years ago it was acceptable, but now? I know the VirusTotal API uses this kind of design.
Is there a better solution, one that will handle heavy loads and be sync or async (returning the result to the client once it's done)?
Thank you.
If you're using Azure, why not simply fire up more servers and use load balancing to handle more load? That way, as your load increases, you have more servers to handle the requests.
Microsoft recently made available the Azure Service Fabric, which gives you a lot of control over spinning up and shutting down these services.