Stitching Cloud Run microservices into single API gateway - google-cloud-platform

I have my application split into Cloud Run services; for example, in a blog application I'd have user, post and comment services.
For each service I get a separate HTTP endpoint on deploy, but what I want is to have api.mydomain.com act as a gateway for accessing all of them via their respective routes (/user*, /post*, etc.).
Is there a standard (i.e. GCP-managed and serverless-y) way to do this?
Things I've tried/thought of and their issues:
Firebase Hosting with rewrites - this is the 'suggested' solution, but it's not very flexible and, more problematically, I think it leads to double-wrapping CDNs on every request. Correct me if I'm wrong, but Cloud Run endpoints already use a CDN, and then you have Firebase Hosting running through Fastly on top of that. It seems silly to needlessly add cost and latency like that.
nginx on a constantly running instance - works ok but not managed and not serverless; requires scaling interventions
nginx on Cloud Run - this seems like it would have highly variable performance since there are (a) two possible cold starts, and (b) again double wrapping CDN.
using Cloud LB/CDN directly - seemingly not supported with Cloud Run
Any ideas? For me this kind of makes Cloud Run unusable for microservices. Hopefully there's a way around it.

Related

How to deploy nuxt frontend with express backend on AWS?

I have a Nuxt v2 SSR application as the frontend running on port 3000
I have an express API as the backend running on port 8000
I have a python script that loads data from external APIs and needs to run continuously
Currently all of them are separate projects with their own package.json and what not
How do I deploy this to AWS?
The only thing I have figured out so far is that I may have to deploy express API as an Elastic Beanstalk application.
Should I keep a separate docker-compose file for each because they are currently separate projects, or should I merge them into one project with a single docker-compose file?
I saw similar questions asked about React, could really appreciate some direction here in Nuxt
None of these similar questions are based on Nuxt
How to deploy separated frontend and backend?
How to deploy a React + NodeJS Express application to AWS?
How to deploy backend and frontend projects if they are separate?
There are a couple of approaches depending on your workload and budget. Let's cover what the options are and which ones apply to the work at hand.
Serverless Approach with Server-side rendering (SSR)
Tutorial
Create an API Gateway route that invokes NuxtRenderer in a Lambda. The resulting generated HTML/JS/CSS is pushed to an S3 bucket, and the S3 bucket acts as a CloudFront origin. Use the CDN to cache the least frequently updated items and use the API call to refresh the cache when needed. This is the cheapest way to deploy, but if you don't have steady traffic some customers may experience a brief lag when cache updates hit. Calculator
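As an illustration of that setup, here is a minimal sketch of the Lambda side, assuming a Nuxt 2 project already built with nuxt build and the serverless-http wrapper (the S3/CloudFront part is configured separately; the packaging details are up to you):

```typescript
// handler.ts - sketch: render Nuxt 2 SSR inside a Lambda behind API Gateway.
// Assumes the project was built with `nuxt build` and that `nuxt`, `express`
// and `serverless-http` are bundled with the function.
import express from "express";
import serverless from "serverless-http";
// Nuxt 2 programmatic API
const { Nuxt } = require("nuxt");

const app = express();
const nuxt = new Nuxt({ dev: false }); // serves from the .nuxt build output

// Make sure Nuxt is ready before delegating requests to its renderer.
const ready = nuxt.ready();
app.use(async (req, res) => {
  await ready;
  nuxt.render(req, res);
});

// API Gateway / Lambda entry point
export const handler = serverless(app);
```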
Serverless Approach via static generation
This is the easiest way to get going. All the routes that do not need dynamic template generation can simply live in an S3 bucket. The simplest approach is to run nuxt generate and then upload the resulting contents of dist to the bucket. There is no Node backend here, which isn't what the question asks for, but it's worth mentioning.
Well documented on NuxtJS.org, and it fits in the free tier.
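If you prefer to script the upload rather than use the console or CLI, a minimal sketch with the AWS SDK v3 could look like this; the bucket name and region are placeholders:

```typescript
// upload-dist.ts - sketch: push the output of `nuxt generate` (dist/) to S3.
// The bucket name/region are placeholders; the bucket must already exist and
// be configured for static website hosting (or sit behind CloudFront).
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readdirSync, readFileSync, statSync } from "fs";
import { join, extname, relative, sep } from "path";

const s3 = new S3Client({ region: "eu-central-1" }); // adjust region
const BUCKET = "my-nuxt-site-bucket";                // placeholder

const CONTENT_TYPES: Record<string, string> = {
  ".html": "text/html",
  ".js": "application/javascript",
  ".css": "text/css",
  ".json": "application/json",
  ".png": "image/png",
  ".svg": "image/svg+xml",
};

// Recursively walk dist/ and upload each file with a best-effort content type.
async function uploadDir(root: string, dir = root): Promise<void> {
  for (const name of readdirSync(dir)) {
    const full = join(dir, name);
    if (statSync(full).isDirectory()) {
      await uploadDir(root, full);
      continue;
    }
    await s3.send(new PutObjectCommand({
      Bucket: BUCKET,
      Key: relative(root, full).split(sep).join("/"),
      Body: readFileSync(full),
      ContentType: CONTENT_TYPES[extname(name)] ?? "application/octet-stream",
    }));
  }
}

uploadDir("dist").catch((err) => {
  console.error(err);
  process.exit(1);
});
```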
Serverless w/ Elastic Beanstalk
IMO this approach is unnecessary and slightly dated given what AWS offers in 2022. It still works of course, but the cost-to-benefit ratio isn't attractive.
To do this you need to use the eb command line tool and set NPM_CONFIG_UNSAFE_PERM=true. AFAIK there is nothing else special that needs to happen for eb to know what to do from there. Calculator
Server-ish Approach(es)
Another alternative is to use Lightsail and a Node.js server. Lightsail offers a low-cost (though not as low as serverless) always-on Node.js server. In this case you would clone your project to the server and then set up a systemd unit to keep Node.js running.
Yet another way to do this, with some scalability, is to use ECS and Docker. In this case you would create a Dockerfile that builds the container, executes npm start, and exposes the port Nuxt runs on to the host. This example shows how to run it using Fargate, which is essentially a serverless version of an EC2 machine. Calculator
There are a few ways for you to deploy your stack on AWS. I can give you some options, but your best shot at saving costs is to use Lambda functions for your backend, S3 for your frontend, and a scheduled Lambda job for your Python script.
For your backend - https://github.com/vendia/serverless-express (a minimal wrapper sketch follows below)
For your Nuxt frontend - https://nuxtjs.org/deployments/amazon-web-services
For your Python job - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
None of this is trivial to set up, but with the links above you'll probably get an idea of how you can implement your solution.
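For example, wrapping an existing Express API with serverless-express only takes a few lines. This is a sketch that assumes your Express app is exported from a module called ./app (a placeholder path):

```typescript
// lambda.ts - sketch: expose an existing Express API as a single Lambda
// behind API Gateway using @vendia/serverless-express.
// "./app" is a placeholder for wherever your Express app is exported from.
import serverlessExpress from "@vendia/serverless-express";
import app from "./app";

// One handler serves every route of the Express app (e.g. /users, /posts).
export const handler = serverlessExpress({ app });
```

Locally you keep running the app with Express on port 8000 as before; only this Lambda entry point is new.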

NGINX - AWS - LoadBalancer

I have to build a web application that must handle a maximum of 10,000 concurrent users for one hour. The web server is NGINX.
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
Can you suggest a suitable deployment on AWS?
A load balancer in front of 2 or more EC2 instances?
If so, which EC2 sizing do you recommend? Is it better to use Auto Scaling?
Thanks
Thanks for your answer. The application is two PHP pages and the impact is minimal, because in the PHP code I only write two functions that check user/password and a token.
The video is provided by the Wowza CDN because it is live streaming, not on-demand.
What tool or service do you suggest for stress testing the web server?
I have to build a web application that must handle a maximum of 10,000 concurrent users for one hour.
That averages out to roughly 3 requests per second, which is not so bad. Sizing is a complex topic, and without more details, constraints, testing, etc., you cannot get a reasonable answer. There are many options, and without more information it is not possible to say which one is best. You just stated NGINX, but not what it's doing (static sites, PHP, CGI, proxy to something else, etc.).
The application is a simple landing page with an HTML5 player streaming video from the Wowza CDN.
I will just lay down a few common options:
Let's assume it is a single static web page (another assumption) referencing an external resource (the video). Then the simplest and most scalable solution would be an S3 bucket hosted behind CloudFront (CDN).
If you need some simple quick logic, maybe a Lambda behind a load balancer could be good enough (see the sketch after these options).
And you can of course host your solution on full compute (EC2, Beanstalk, ECS, Fargate, etc.) with different scaling options. But you will have to test out what your feasible scaling parameters and bottlenecks are (I/O, network, CPU, etc.). Please note that different instance types may have different network and storage throughput. AWS gives you the opportunity to test and find out what is good enough.
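To make the "Lambda behind a load balancer" option concrete, here is a minimal sketch of a Lambda registered as an ALB target doing the kind of user/password and token check mentioned in the comments. The validation functions and header name are placeholders, not a real implementation:

```typescript
// auth-handler.ts - sketch of a Lambda used as an ALB target for two small
// checks (user/password and token) as described in the question's comments.
// checkCredentials/checkToken and the header name are placeholders.
import type { ALBEvent, ALBResult } from "aws-lambda";

function checkCredentials(user?: string, password?: string): boolean {
  return Boolean(user && password); // placeholder for the real check
}

function checkToken(token?: string): boolean {
  return Boolean(token); // placeholder for the real check
}

export const handler = async (event: ALBEvent): Promise<ALBResult> => {
  const body = event.body ? JSON.parse(event.body) : {};
  const ok =
    event.path === "/login"
      ? checkCredentials(body.user, body.password)
      : checkToken(event.headers?.["x-auth-token"]);

  // ALB targets must return this response shape.
  return {
    statusCode: ok ? 200 : 401,
    statusDescription: ok ? "200 OK" : "401 Unauthorized",
    headers: { "Content-Type": "application/json" },
    isBase64Encoded: false,
    body: JSON.stringify({ ok }),
  };
};
```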

Performance question - Lambda based API - hosted directly on AWS vs. through Netlify Functions

Hey guys, so I'm developing a corona API for a hackathon, and we currently have a "classic" setup running on an EC2 VM with the data simply in .json files on disk.
For the default route, /daily, it takes approximately 400-600ms.
Now I've been messing around with redesigning it with serverless functions which definitely works, however performance-wise I'm severely underwhelmed.
I did it once in AWS directly, with their API Gateway and Lambda functions, with the data in DynamoDB and a Redis cache (Redis Cloud).
So when a request comes in, it first checks Redis, and if the key can't be found, it queries Dynamo and returns the result.
This works great, with response times (after the first request, where the Lambda has a cold start / the Redis entry may not exist yet) a bit lower, around 200-400ms.
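For reference, the read path described above (Redis first, DynamoDB on a miss, then populate the cache) can be sketched roughly like this; the table name, key and environment variable are placeholders:

```typescript
// daily-handler.ts - sketch of the cache-aside read path described above:
// check Redis first, fall back to DynamoDB on a miss, then populate the cache.
// Table name, key and env-var names are placeholders.
import Redis from "ioredis";
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand } from "@aws-sdk/lib-dynamodb";

// Created outside the handler so warm invocations reuse the connections.
const redis = new Redis(process.env.REDIS_URL!);
const dynamo = DynamoDBDocumentClient.from(new DynamoDBClient({}));

export const handler = async () => {
  const cacheKey = "daily";

  // 1. Try the cache.
  const cached = await redis.get(cacheKey);
  if (cached) {
    return { statusCode: 200, body: cached };
  }

  // 2. Cache miss: read from DynamoDB.
  const { Item } = await dynamo.send(new GetCommand({
    TableName: "corona-stats",  // placeholder table
    Key: { id: "daily" },       // placeholder key schema
  }));

  const body = JSON.stringify(Item ?? {});

  // 3. Populate the cache with a short TTL so later requests stay fast.
  await redis.set(cacheKey, body, "EX", 300);

  return { statusCode: 200, body };
};
```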
However, setting up these individual Lambda functions with all the IAM configuration required by AWS was a pain in the butt.
So, as I'm a big fan of Netlify as well, I decided to try it through their functions (I know they're basically just Lambdas too). Anyway, I have it using the same Redis cache, and instead of Dynamo (because I couldn't figure out the IAM to let Netlify read our DynamoDB) I used FaunaDB as the database, which works well enough. The problem is, even when the Redis entry exists, requests through Netlify Functions take AT LEAST 1s, often around 1.3s-1.5s.
Any idea why the large discrepancy?
The AWS Lambda version is hosted in eu-central-1 and so is the Redis Cloud instance. I'm not sure where Netlify set up our functions. That's the only way I can explain the difference, when both are simply hitting the Redis cache and returning the result. Maybe the Netlify function is in us-east-1 or somewhere further away, where the round trip to Redis takes much longer.
EDIT 1:
Screenshot from monitoring software:
EDIT 2:
So I tested a Netlify function route that just returns an object of random content, without making any other requests, and it was just as fast as the AWS one. That seems to support my hypothesis: the request from Netlify's AWS region, wherever they put my function, takes that much longer to query Redis than the Lambda version, where I could specify exactly which region to execute from.
Any other ideas / tips?

Can cloud functions like AWS Lambdas or Google Cloud Function access databases from other servers?

I have a webapp and database that aren't hosted on any cloud service, just on a regular hosting platform.
I need to build an API to read and write to that database, and I want to use cloud functions to do so. Is it possible to connect to a remote database from cloud functions (such as AWS Lambda or Google Cloud Functions) even when it's not hosted on that cloud service?
If so, can there be problems with doing so?
Cloud Functions are just Node.js code that runs in a managed environment. This means your code can do almost anything Node.js scripts can do, as long as you stay within the restrictions of that environment.
I've seen people connect to many other database services, both within Google Cloud Platform and outside of it. The main restriction to be aware of is that you'll need to be on a paid plan in order to call APIs that are not running on Google Cloud Platform.
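As a rough sketch of what such a connection could look like, assuming a Firebase HTTPS function, the pg driver, and a placeholder hostname for the externally hosted database:

```typescript
// index.ts - sketch of a Firebase Cloud Function talking to a Postgres
// database hosted outside Google Cloud. Host/credentials/table are
// placeholders; outbound calls to non-Google services require the paid plan.
import * as functions from "firebase-functions";
import { Pool } from "pg";

// Create the pool outside the handler so warm instances reuse connections.
const pool = new Pool({
  host: "db.example-hosting.com", // placeholder: your existing hosting provider
  port: 5432,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: "myapp",
  max: 1, // keep the pool tiny: each instance handles one request at a time
});

export const api = functions.https.onRequest(async (req, res) => {
  const { rows } = await pool.query("SELECT id, title FROM posts LIMIT 10");
  res.json(rows);
});
```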
Yes it's possible.
If so, can there be problems with doing so?
There could be high latency if the database is in a different network. Also, long-lived database connection pools don't really work well in these environments, due to the nature of the functions being created and destroyed constantly. And if your function reaches a high level of concurrency, you may exhaust the number of available connections on your database server.
You can use FaaS the same way as a web service hosted on any web server or cloud server.
You have to be careful with the duration of your calls to the DB, because FaaS functions are limited in time (15 min for AWS Lambda and 9 min on Google), and you need to configure the firewall on your DB server properly.
A container of your Lambda function can be reused, and you can use some tricks with this - Best Practices for AWS Lambda Container Reuse.
But you can't be sure that nothing has changed between invocations of your service.
You could read some good advice about it there - https://stackoverflow.com/a/37524237/182344
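A common shape of that trick, sketched here with the mysql2 driver and placeholder credentials, is to keep the connection in a module-level variable, create it lazily, and re-create it if the reused container's connection has gone stale:

```typescript
// db.ts - sketch of the container-reuse trick for AWS Lambda: keep the
// connection outside the handler, create it lazily, and re-create it when a
// reused container's connection has died. Credentials/host are placeholders.
import mysql, { Connection } from "mysql2/promise";

let connection: Connection | null = null;

async function getConnection(): Promise<Connection> {
  if (connection) {
    try {
      await connection.ping(); // cheap liveness check on a reused container
      return connection;
    } catch {
      connection = null; // stale connection from a previous invocation
    }
  }
  connection = await mysql.createConnection({
    host: process.env.DB_HOST,       // placeholder
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
  });
  return connection;
}

export const handler = async () => {
  const db = await getConnection();
  const [rows] = await db.query("SELECT NOW() AS now");
  return { statusCode: 200, body: JSON.stringify(rows) };
};
```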
PS: Azure Functions have an Always On setting, but I am not sure how pooling would work in that case.
Yes, you can access on-premises resources from serverless products.
Please check this detailed tutorial, where you can find three methods to achieve your goal (link):
Connecting using a VPN
Connecting using a Partner Interconnect
Connecting using an Interconnect solution

Which AWS services for mobile app backend?

I'm trying to figure out which AWS services I need for the mobile application I'm working on with my startup. The application should go into the App Store / Play Store later this year, so we need a best-practice solution for our case. It must be highly scalable, so that if there are thousands of requests to the server it remains stable and fast. We may also want to deploy a website on it.
Currently we are using Uberspace (link) servers with a Node.js application and MongoDB running on them. Everything works fine, but for the release version we want to go with AWS. What we need is something we can run Node.js / MongoDB (or something similar to MongoDB) on, and somewhere to store images like profile pictures that can be requested by the user.
I have already read some information about AWS on their website, but that didn't help a lot. There are so many services and we don't know which of them fit our needs.
A friend told me to just use AWS EC2 for the Node.js server + MongoDB and S3 to store images, but on some websites I have read that it is better to use this architecture:
We would be glad if there is someone who can share his/her knowledge with us!
To run code: you can use Lambda, but be careful: the benefit is that you don't have to worry about servers; the downside is that Lambda is sometimes unreasonably slow. If you need it really fast then you need EC2 with auto scaling. If you tune it up properly it works like a charm.
To store data: DynamoDB if you want it really fast (single-digit milliseconds regardless of load and DB size) and used according to best practices. It REQUIRES a proper schema or it will cost you a fortune; otherwise use MongoDB on EC2. (A minimal access sketch follows after this list.)
If you need an RDBMS then RDS (benefits: scalability, availability, no headache with maintenance).
Cache: they have both Redis and Memcached (ElastiCache).
S3: to store static assets.
I do not suggest CloudFront; there are other CDNs on the market with better prices/possibilities.
API Gateway: yes, if you have an API.
Depending on your app, you may need SQS.
Cognito is a good service if you want to authenticate your users using Google/FB/etc.
CloudWatch: if you're a metrics addict then it's not for you, and perhaps standalone monitoring on EC2 will be better; but for most people CloudWatch is absolutely OK. Create all necessary alarms (CPU overload etc.).
You should use IAM roles to allow access to your S3/DB from Lambda/other AWS services.
You should not use the root account but create a separate IAM user instead.
Create a billing alarm: you'll know if you're going to break the budget.
Create Lambda functions to back up your EBS volumes (and whatever else you may need to back up). There's no problem if a backup starts a second late, so Lambda is OK here.
Run Trusted Advisor now and then.
It'd be better for you to set it all up using a CloudFormation stack: you'll be able to deploy the same infrastructure with ease in another region if/when needed, and it's relatively easier to manage infrastructure as code than infrastructure built manually.
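To illustrate the DynamoDB point above, here is a minimal access sketch using the AWS SDK for JavaScript v3 document client; the table name and key schema are hypothetical and should be replaced by your own:

```typescript
// users-table.ts - minimal sketch of the DynamoDB key-value access pattern
// mentioned above. Assumes a hypothetical "Users" table with partition key
// "userId"; adjust table and key names to your own schema.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, GetCommand, PutCommand } from "@aws-sdk/lib-dynamodb";

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = "Users"; // hypothetical table name

export async function putUser(userId: string, profileImageUrl: string): Promise<void> {
  // Single-item writes keyed by the partition key are what DynamoDB is fast at.
  await client.send(new PutCommand({
    TableName: TABLE,
    Item: { userId, profileImageUrl, updatedAt: Date.now() },
  }));
}

export async function getUser(userId: string) {
  const { Item } = await client.send(new GetCommand({
    TableName: TABLE,
    Key: { userId },
  }));
  return Item ?? null; // null when the key does not exist
}
```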
If you want a highly scalable application, you may need to use a serverless architecture with AWS Lambda.
There is a framework called Serverless that helps you manage and organize all your Lambda functions and put them behind API Gateway.
For storage you can use AWS EC2 with MongoDB installed, or you can go with Amazon DynamoDB as your NoSQL store.
If you want a frontend, both web and mobile, you may want to look at the React Native approach.
I hope I've been helpful.