I have an app built with Django and React, but there are a few problems I am facing:
Since each user will have a large PostgreSQL database exclusively for themselves, I am creating a separate AWS EC2 instance for every user.
Whenever there is a new user, I have to manually install Postgres, set up nginx, and configure other important things, and that is just for Django on EC2. Then I set up the ReactJS frontend in Amazon S3.
How can I automate this so that when a paid user signs up, everything happens automatically?
The following should happen automatically:
- Automatically create a new EC2 instance and deploy the backend (this includes installing several libraries, Postgres, Redis, and an async task queue (we use Huey, which is like Celery), running migrations, and other routine setup).
- Automatically create an S3 bucket and deploy the frontend for the user. We will also have to set up domains etc. for this.
I would like to know your input on how to approach this problem; a rough sketch of what I have in mind is below.
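To make the goal concrete, here is a rough boto3 sketch of the kind of provisioning step I would want to trigger when a paid user signs up. The AMI ID, instance type, bucket naming scheme, and the bootstrap script are placeholders, and security groups, key pairs, DNS, and the frontend upload are omitted:

```python
# Hypothetical per-tenant provisioning sketch -- names and IDs are placeholders.
import boto3

# User-data script that bootstraps the instance on first boot
# (install Postgres, Redis, nginx, pull the Django app, run migrations, ...)
USER_DATA = """#!/bin/bash
apt-get update -y
apt-get install -y postgresql redis-server nginx
# git clone / pip install / migrate / configure Huey would go here
"""

def provision_tenant(user_id: str, region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    s3 = boto3.client("s3", region_name=region)

    # Backend: one EC2 instance per user, bootstrapped via user data
    instance = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.small",           # placeholder size
        MinCount=1,
        MaxCount=1,
        UserData=USER_DATA,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "tenant", "Value": user_id}],
        }],
    )["Instances"][0]

    # Frontend: one S3 bucket per user for the React build
    bucket_name = f"myapp-frontend-{user_id}"   # placeholder naming scheme
    s3.create_bucket(Bucket=bucket_name)

    return instance["InstanceId"], bucket_name
```

In practice this would probably run from a background job (e.g. a Huey task) kicked off by the signup view rather than inline in the request.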
Please understand that I am using a translator because I can't speak English.
I want to use Strapi as the API server, consume its API from Next.js, and build a website that displays news articles.
News articles are added daily through the Strapi admin panel, and Next.js should reflect them in real time.
The Next.js site will also let users register, modify, and delete posts (a board feature).
Also, an Expo React Native app must fetch data from the Strapi API and display it.
The cloud provider must be AWS.
In addition, after development in the local environment, the customer must review it on a test server, and only after that is it promoted to the production environment.
Therefore, I think we need at least two servers (test and production).
S3 and RDS should also be used.
It's not a very large site.
There are not many users yet.
If Docker is needed, I will use it.
Frontend (Next.js):
- Amplify
- Docker -> App Runner
- Docker -> Fargate
- EC2

Backend (Strapi):
- Amplify
- Docker -> App Runner
- Docker -> Fargate
- EC2
Help me Please!
AWS announced 4 months ago that "you can now package and deploy Lambda functions as container images"; see here for the AWS announcement and sample code. I am trying to deploy my Django app in production using this service and set up CI/CD using GitHub. I've been able to figure out the CI/CD for deploying a simple Python app with Lambda (no S3 or RDS). However, I don't know how to get Django, S3, Postgres, and Lambda to work together.

I am new to Docker and followed this tutorial. However, the tutorial does not talk about how to serve the static files using S3 or how to get Lambda, Postgres, and S3 all working with the container, presumably because this is a fairly new service. I was wondering if anyone has successfully deployed a Django app for development purposes using these services and can share what the Dockerfile, docker-compose.yml, etc. should look like.
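For the static-files part specifically, the usual pattern (as far as I can tell) is that Postgres lives in RDS and static files live in S3, with the Lambda container image holding only the Django code; neither Postgres nor S3 goes inside the container. A hedged sketch of the Django settings using django-storages, where the bucket name, region, and environment variable names are my own placeholders, would be something like:

```python
# settings.py (sketch) -- assumes django-storages and boto3 are installed
# and that a bucket for static files already exists; adjust names to your setup.
import os

INSTALLED_APPS = [
    # ... the rest of your apps ...
    "storages",
]

# Serve collected static files from S3 instead of the Lambda filesystem
STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME", "my-app-static")
AWS_S3_REGION_NAME = os.environ.get("AWS_S3_REGION_NAME", "us-east-1")
AWS_S3_CUSTOM_DOMAIN = f"{AWS_STORAGE_BUCKET_NAME}.s3.amazonaws.com"
STATIC_URL = f"https://{AWS_S3_CUSTOM_DOMAIN}/static/"

# Point Django at the RDS Postgres instance -- the Lambda container is
# ephemeral, so the database has to live outside the function.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ["DB_NAME"],
        "USER": os.environ["DB_USER"],
        "PASSWORD": os.environ["DB_PASSWORD"],
        "HOST": os.environ["DB_HOST"],  # RDS endpoint
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```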
I'm developing a Django application that is deployed to an Apache server.
I'm using two servers right now. On one server I have the Development and Staging instances, which I deploy with a bash shell script, and on the other server the Production instance, which I deploy with a mina deploy script.
The problem comes after deploying: the permissions on the /var/www/... folder are not correct after the deployment, which won't allow Apache to serve the website.
I was wondering if there is any way I can deploy this code without having to change permissions afterwards. In both cases I'm not using the root user but a user with sudo permissions.
My stack is WildFly, Angular, Spring, RDS, and CloudFront. Frontend resources (HTML/JS, etc.) are stored in the app (i.e., delivered by WildFly).
For the backend and DB I can deploy with zero downtime using 2 EC2 instances behind an ELB, but I am not sure how to handle this scenario:
A user gets the old JS/HTML from our server -> a new version is deployed -> the user clicks something that calls the old API (e.g., the new version has a new mandatory param).
Is there a way to avoid this? I can only think of giving the new param a default value. Or would API versioning make sense here?
Another question: what if the frontend resources are delivered by CloudFront + S3? How do I keep the deployment of new resources to S3 in sync with the backend?
I can only think of giving the new param a default value. Or would API versioning make sense here?
This sounds like exactly what API versioning is intended to solve. You would change API versions anytime there is a change that would break clients of the previous version.
Another question: what if the frontend resources are delivered by CloudFront + S3? How do I keep the deployment of new resources to S3 in sync with the backend?
Deploying them at the same time is up to you; that's a part of your deployment process that you need to automate somehow. You can use versioning and deployment order to help here. For example, if your entire front-end is deployed on S3 (a rough sketch of these steps follows the list):
1. Deploy a new version of your API, under a new API version number.
2. Deploy the new static UI resources.
3. Issue a CloudFront cache invalidation.
4. Users start seeing new front-end resources that reference the new back-end API version.
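A rough boto3 sketch of steps 2 and 3, where the bucket name, distribution ID, and build directory are placeholders and error handling is omitted (in practice this would run from your deployment pipeline):

```python
# Upload the new UI build to S3, then invalidate the CloudFront cache.
import mimetypes
import os
import time

import boto3

BUCKET = "my-frontend-bucket"        # placeholder: bucket serving the UI
DISTRIBUTION_ID = "E1234567890ABC"   # placeholder: CloudFront distribution
BUILD_DIR = "build"                  # placeholder: local directory with the new build

s3 = boto3.client("s3")
cloudfront = boto3.client("cloudfront")

# Step 2: deploy the new static UI resources
for root, _, files in os.walk(BUILD_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, BUILD_DIR)
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        s3.upload_file(path, BUCKET, key, ExtraArgs={"ContentType": content_type})

# Step 3: invalidate the CloudFront cache so users pick up the new files
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),
    },
)
```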
If your front-end UI is a mix of EC2 server dynamic resources and S3 static resources, and the EC2 UI components and the API are updated as part of the same deployment, then you can use a version prefix for your static resources on S3 to allow multiple versions to be available at once. For example (a sketch of the prefixed upload follows the list):
1. Deploy the new static UI resources to S3, with a new version prefix. This ensures that both the previous version(s) of the S3 resources and the new version are available at the same time.
2. Deploy the EC2 app, which updates both the EC2 UI components and the API.
3. Users start loading the new version of the app from EC2, which references the static resources under the new version prefix, which CloudFront then caches and serves.
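And a similarly rough sketch of the prefixed upload in step 1 (again, the bucket, prefix scheme, and build directory are placeholders):

```python
# Upload each release under its own version prefix so old and new coexist.
import os

import boto3

BUCKET = "my-frontend-bucket"   # placeholder
VERSION = "v42"                 # placeholder: release id the EC2 app's asset URLs point at
BUILD_DIR = "build"             # placeholder

s3 = boto3.client("s3")
for root, _, files in os.walk(BUILD_DIR):
    for name in files:
        path = os.path.join(root, name)
        key = f"{VERSION}/" + os.path.relpath(path, BUILD_DIR)
        s3.upload_file(path, BUCKET, key)

# No invalidation is needed here: the new keys were never cached, and the
# previous prefixes stay in place for users still on the old app version.
```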
Obviously those are just a few scenarios and your situation probably differs in some way. In general you need to use versioning of any resources (static S3 resources, API resources, etc.) and a smart deployment order, to ensure that the end user doesn't see an interruption in service.
I have a Rails app and a Rails API app, both set up in an OpsWorks stack. By default it seems to want to deploy both Rails apps to any Rails-layer instance: when I start an instance, it deploys both apps to it. I was looking to have them on separate instances. I understand I can just create another stack, but what about shared resources like MongoDB or a caching server?
You can override the OpsWorks "deploy::default" recipe with your own deploy recipe under the layer settings and create layer-specific recipes that deploy only the specified app to that layer, not all of them.