Parse Server resource limits - amazon-web-services

I am looking into migrating my parse.com app to parse server with either AWS or Heroku.
The primary frustration I encountered with Parse in the past has been the resource limits
https://parse.com/docs/cloudcode/guide#cloud-code-resource-limits
Am I correct in assuming that, following a migration, the resource limits will be dependent on the new host (i.e. AWS or Heroku)?

Yes. Parse Server is simply a Node.js module, which means that wherever you choose to host your Node.js app determines which resource limits are imposed. You might also be able to set them yourself.

I recently moved mine to AWS, so yes, as stated in a comment, it's just a Node.js module and you have complete control over it. The main constraints here will be the CPU, I/O and network of AWS. I would suggest reading the documentation provided here: https://github.com/ParsePlatform/parse-server. They have also mentioned which EC2 instance types to choose so that Node and Mongo can scale properly.

Related

Can I write API code that runs on serverless (AWS Lambda) and have the same code run on EC2?

I am looking for a language/framework or a method by which I can build API/web application code such that it can run on serverless compute like AWS Lambda, and the same code can run on a dedicated compute system like Lightsail or EC2.
First I thought of using Docker to do this, but the AWS Lambda entry point is a specific function signature, which is very different from Spring controllers. Is there a solution available currently?
Basically, when I run it on Lambda it will have cold start issues; later, when the app is ready or gets popular, I would like to move it to an EC2 instance for better performance and higher traffic load.
I want to start out in this situation so that later it is easy to port and resolve the performance issues.
I'd say no, this is not easily possible.
When you build an API that you want to run on Lambdas, you will most likely use an API Gateway, which takes care of routing to the different Lambda functions (best practice). The moment you are working on an API like this, migrating to EC2 becomes a nightmare, because you would need to rebuild the whole application as more of a monolith that can run on EC2.
I would honestly commit to either running it on EC2/containers or running it on Lambda. If cold start is your main issue with Lambdas, you might want to look into Lambda SnapStart for Java, or use another language like TypeScript/Python.
After searching Google with the right keywords, I finally found what I was looking for. Check out this blog post and code library shared by AWS, which converts the Lambda request and response into the HTTP request/response format the framework expects:
Running APIs Written in Java on AWS Lambda: https://aws.amazon.com/blogs/opensource/java-apis-aws-lambda/
Repo Code: https://github.com/awslabs/aws-serverless-java-container
Thanks, Ricardo, for your response. I will definitely check out Lambda SnapStart and try it as well. I have not tested this completely yet, but it looks promising.
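
To make the pattern behind such adapter libraries concrete, here is a minimal sketch in Python (the question is about Java/Spring, so treat this purely as an illustration): keep the business logic in plain, framework-agnostic functions, and only the thin entry points differ between Lambda and a normal web server. Flask, the route, and the function names are assumptions for the example, not anything from the question.

```python
# Minimal sketch: the same business logic exposed both to AWS Lambda and to a
# regular web server (e.g. running on EC2). Flask and the function names are
# hypothetical illustrations.
import json

from flask import Flask, jsonify


# --- framework-agnostic business logic --------------------------------------
def get_user(user_id: str) -> dict:
    """Plain function with no Lambda- or Flask-specific types."""
    return {"id": user_id, "name": f"user-{user_id}"}


# --- AWS Lambda entry point (assuming an API Gateway proxy integration) -----
def lambda_handler(event, context):
    user_id = event["pathParameters"]["id"]
    return {"statusCode": 200, "body": json.dumps(get_user(user_id))}


# --- EC2 / container entry point (Flask) ------------------------------------
app = Flask(__name__)


@app.route("/users/<user_id>")
def users_route(user_id):
    return jsonify(get_user(user_id))


if __name__ == "__main__":
    # Run as a normal web server when not on Lambda.
    app.run(host="0.0.0.0", port=8080)
```

Only the thin entry points differ between the two deployments; adapter libraries like the one linked above automate that translation for existing frameworks so you don't have to hand-write the Lambda entry point yourself.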

How to check whether my code runs in a container on AWS EC2 or not

My (Python) code runs inside a Docker container.
The container is deployed on AWS EC2 for our production and testing purposes, but sometimes on our local machines or other cloud vendors for development and CI/CD purposes.
For some functionality, I want my python code to be able to distinguish between an EC2 deployment and non-EC2. Is this possible?
I found this answer, which uses the EC2 instance metadata endpoint, but I'm wondering:
a) Would this also work from within a docker container?
b) Isn't there a more elegant solution? Issuing an HTTP request and waiting for it seems a bit too much.
(I'm aware that a simple solution is probably to add some proprietary environment variable or flag; I'm trying to find a more native way to check this.)
I recommend going with a custom environment variable. This way you will be able to easily reproduce the required behaviour outside of AWS (on your workstation or with another cloud provider).
Using curl or checking for the presence of /etc/cloud would make your application's behaviour dependent on third-party services/tools. Besides the added logic complexity (you'd have to handle possible curl errors, like invalid response codes), that can lead to bugs you surely don't want to meet.
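
For illustration, here is a short Python sketch of both options: the custom environment variable recommended above, and the metadata probe from the linked answer (which typically also works from inside a Docker container on EC2, since it is just an HTTP call to the link-local metadata address). The variable name RUNNING_ON_EC2 is made up for the example, and the probe uses the simpler IMDSv1 call; IMDSv2 would need an extra token request.

```python
import os

import requests  # only needed for the metadata probe


# Recommended: an explicit flag you set yourself in the EC2 deployment
# (e.g. docker run -e RUNNING_ON_EC2=1 ...). The name is hypothetical.
def running_on_ec2_via_env() -> bool:
    return os.environ.get("RUNNING_ON_EC2") == "1"


# Alternative: probe the instance metadata endpoint. Note the timeout and
# error handling you have to add -- exactly the extra complexity the answer
# above warns about. Uses IMDSv1; IMDSv2 requires a token first.
def running_on_ec2_via_metadata(timeout: float = 0.2) -> bool:
    try:
        resp = requests.get(
            "http://169.254.169.254/latest/meta-data/instance-id",
            timeout=timeout,
        )
        return resp.status_code == 200
    except requests.RequestException:
        return False
```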

Separate URL for each git branch in Cloud Run

I am looking into Cloud Run to host my new app, and I am wondering if it is possible to generate a separate URL for each git branch.
I use Netlify to host my other app. When it is connected to GitHub (or other VCS services), it builds the source code in a branch and deploys it to a URL that is specific to that branch.
Can it be done easily or do I have to write some logic?
Or do you think AWS Amplify or some other service is a better fit?
The concept of Cloud Run and URLs is quite simple:
https://<service-name>-<project hash>.<region>.run.app
If your project and region are the same for all the branches, you simply have to deploy a different service for each branch to get a different URL.
That was for Cloud Run. Now, I'm not sure whether Netlify integrates with Cloud Run; I found no documentation on this.
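
To make "a different service for each branch" concrete, here is a minimal sketch of a deploy script in Python that wraps the gcloud CLI. It assumes the container image is already built and pushed, and that the gcloud project and region are already configured; the image path and naming scheme are made-up examples.

```python
import re
import subprocess


def current_branch() -> str:
    # Ask git for the currently checked-out branch name.
    out = subprocess.run(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()


def branch_service_name(branch: str, app: str = "my-app") -> str:
    # Cloud Run service names are restricted to lowercase letters, digits and
    # hyphens, so sanitize branch names like "feature/login-form".
    slug = re.sub(r"[^a-z0-9-]+", "-", branch.lower()).strip("-")
    return f"{app}-{slug}"


def deploy(image: str = "gcr.io/my-project/my-app:latest") -> None:
    service = branch_service_name(current_branch())
    # Each branch gets its own service, and therefore its own
    # https://<service-name>-<project hash>.<region>.run.app URL.
    subprocess.run(
        ["gcloud", "run", "deploy", service,
         "--image", image, "--platform", "managed"],
        check=True,
    )


if __name__ == "__main__":
    deploy()
```

Running something like this from CI on every branch push gives each branch its own service URL; cleaning up services for deleted branches is left to you.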
This answer won't be directly useful to you but I think it's relevant and worth mentioning
The open source Knative API (and implementation) actually exposes a "tag" feature while splitting the traffic between multiple revisions: https://github.com/knative/docs/blob/master/docs/serving/spec/knative-api-specification-1.0.md#traffictarget
This feature is not currently supported on Cloud Run fully managed, but it will be.
By tagging releases this way, you could define tag: v1 and tag: v2 in your traffic configuration, and you would get new URLs like:
https://v1-SERVICE_NAME...run.app
https://v2-SERVICE_NAME...run.app
that directly go to these specific versions.
And the interesting thing is that the revisions you specify in the traffic: block of the Service object do not have to receive any traffic (you can set their traffic percentage to 0), but a domain name like the ones shown above is still created for those inactive revisions of your app.
So once Cloud Run fully managed supports tag fields, you can actually achieve this, although it will be a less out-of-the-box experience than Netlify.

Knative: Run service on specific machine type

I'd occasionally like to run a processing-heavy job on demand. Knative serving seemed like a good fit since it can scale from/to zero pods.
Is it possible to specify a node pool or node selector for a Knative service? Or is there some other way to ensure pods are created with specific machine types? I couldn't find anything in the docs.
I'm using GKE.
EDIT: Open feature request (please upvote!) https://issuetracker.google.com/issues/114402172
It isn't currently possible to specify a node pool/selector while using Knative, nor is there a way to ensure pods are created with a predefined machine type on Knative.
I encourage you to file this as a feature request.
This has moved to this Knative issue, as the question doesn't seem to be GKE specific.

Using impdp/expdp with RDS Oracle on AWS

I'm very new to Amazon Web Services, especially their RDS offering. I have set up an Oracle database (11.2) and I now want to import a dump we made locally from our server using expdp. Apparently, the ability to use expdp/impdp on AWS is quite new. From what I understand, when creating an Oracle database on RDS, a DATA_PUMP_DIR is automatically created. What is less obvious is how to access this directory and make our local dump available to RDS. I've tried to read the following information on their website: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html. But there is a lot I don't understand:
Why do I have to set up an EC2 instance when the dump file is actually on my local computer (and I can access the RDS database remotely using SQL*Plus or SQL Developer)?
They often use the 'sys' or 'system' user in their examples, but the Oracle security settings for RDS say that these users are made unavailable, i.e. you cannot connect to the database as SYSDBA.
Could someone please point me to a simple and clear tutorial on how to use impdp on AWS?
Thanks
It is possible to use Data Pump on RDS now.
duduklein's answer was correct when he wrote it, but the RDS docs now have details about using Oracle Data Pump. The doc page URL is unmodified from the link originally posted in the question (nice job, Amazon!), but it now has new content on using Data Pump.
It's not possible for now. I just contacted Amazon (through premium support) about the same issue, and they told me that this is a feature request that has already been passed to the RDS team, but there is no estimate of when it will be available.
The only way you can import dump files is by using the "exp" utility instead of "expdp". In that case, you can use the "imp" utility to import the data into RDS.