Do many AWS OpsWorks apps mean many stacks? What about shared resources?

I have a Rails app and a Rails API app, both set up in an OpsWorks stack. By default, OpsWorks seems to want to deploy both Rails apps to any Rails layer instance: when I start an instance, it deploys both apps to it. I was looking to have them on separate instances. I understand I can just create another stack, but what about shared resources like a MongoDB or caching server?

You can override the OpsWorks "deploy::default" recipe with your own deploy recipe in the layer settings: create layer-specific recipes that deploy only the specified app to that layer, rather than all of them.
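As a sketch, a layer-specific deploy recipe could iterate over the stack's deploy data and skip every app except the one intended for that layer. Note the `app_to_deploy` attribute below is a hypothetical custom JSON attribute you would set per layer; the `opsworks_deploy` definition comes from the built-in OpsWorks deploy cookbook:

```ruby
# Hypothetical custom recipe, e.g. my_cookbook/recipes/deploy_api.rb.
# Assumes a custom JSON attribute "app_to_deploy" that names the single
# app short name which belongs on this layer.
node[:deploy].each do |application, deploy|
  # Skip every app except the one this layer should run
  next unless application == node[:app_to_deploy]

  opsworks_deploy do
    deploy_data deploy
    app application
  end
end
```

You would then attach this recipe to the layer's Deploy lifecycle event instead of letting the default recipe deploy everything.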

Related

What is the best option to deploy Next.js and Strapi projects to AWS?

Please understand that I am using a translator because I can't speak English.
I want to use Strapi to set up an API server, consume its API from Next.js, and build a website that displays news articles.
News articles are registered daily through the Strapi admin panel, and Next.js should reflect them in real time.
Next.js will also include the ability for users to register, modify, and delete posts (a board).
The Expo React Native app must also receive data from the Strapi API and display it.
The cloud provider must be AWS.
In addition, after development in the local environment, the work must be reviewed by the customer on a test server and then promoted to the production environment.
Therefore, I think we need at least two servers to manage.
S3 and RDS should also be used.
It's not a very large site.
There are not many users yet.
If Docker is needed, I will use it.
Frontend (Next.js):
- Amplify
- Docker -> App Runner
- Docker -> Fargate
- EC2
Backend (Strapi):
- Amplify
- Docker -> App Runner
- Docker -> Fargate
- EC2
Help me Please!

How to autodeploy Django on EC2 and React on S3

I have an app built with Django and React, but there are a few problems I am facing:
Since I will have a large database (Postgres) exclusively for each user, I am creating a separate AWS EC2 instance for every user.
Whenever there is a new user, I have to manually install Postgres, set up nginx, and do other important things, and that's just for Django on EC2. Then I set up the ReactJS frontend in Amazon S3.
How can I automate this so that when a paid user signs up, everything happens automatically?
The following should happen automatically:
- Automatically create a new EC2 instance and deploy the backend (this includes installing several libraries, Postgres, Redis, and an async task queue (we use Huey, which is like Celery), running migrations, and other routine steps).
- Automatically create an S3 bucket and deploy the frontend for the user. We will have to set up domains, etc., for this.
I would like to know your inputs on how to approach this problem.
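One way to sketch this is a provisioning script driven by the AWS CLI, triggered from the signup flow. The AMI id, instance type, and bucket naming scheme below are placeholders, not recommendations; the heavy lifting (Postgres, Redis, nginx, Huey, migrations) would live in the referenced user-data script or, better, a prebaked AMI:

```shell
#!/bin/sh
# Sketch of per-user provisioning with the AWS CLI. The AMI id, key
# names, and bucket naming scheme are assumptions for illustration.

# Derive a per-user S3 bucket name (bucket names must be globally unique)
bucket_for() {
  printf 'myapp-%s-frontend' "$1"
}

provision_user() {
  user="$1"
  # Launch a backend instance; user-data.sh would install Postgres,
  # Redis, nginx, Huey, run migrations, etc.
  # aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.small \
  #   --user-data file://user-data.sh \
  #   --tag-specifications "ResourceType=instance,Tags=[{Key=customer,Value=$user}]"

  # Create the per-user frontend bucket and upload the React build
  # aws s3 mb "s3://$(bucket_for "$user")"
  # aws s3 sync build/ "s3://$(bucket_for "$user")"
}
```

Calling `provision_user` from a post-payment webhook handler automates the manual steps; once you have more than a handful of users, a prebaked AMI plus an infrastructure-as-code template (CloudFormation or Terraform) per customer is usually more reliable than installing everything from scratch on each launch.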

WSO2 Identity Server: automate the deployment of policies, claims, and other configs

I have hundreds of different Identity Server configurations (policies, claims, service providers, etc.),
and I need to repeat the same configuration across several environments: dev, test, prod.
Doing it by hand through export/import in the web console is a nightmare.
What is the best practice for automated configuration deployment to WSO2 IS?
I'm thinking about the following options:
- create a script that calls the admin services to import Identity Server configs
- create a custom deployer (like the Synapse and data-service deployers, etc.) and call the admin services or make in-memory API calls
- find where and how the configuration is stored in the database and write a SQL script to fill it
Or maybe something already exists for config deployment and I just can't find it?
You can create your own scripts or custom methods to manage the deployments, but you have to maintain those scripts yourself.
In that case, you can use deployment automation tools such as Puppet or Chef.
You can use the WSO2 Puppet modules to deploy your configuration to different environments.
In case somebody needs a file-based deployer: I created a Groovy script deployer that can be used for different purposes:
- a service-provider deployer
- a policy deployer
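As a sketch of the admin-services approach: WSO2 IS exposes SOAP admin services under `https://<host>:9443/services/`, so a script can loop over environments and push the same configuration to each. The hostnames and credentials below are placeholders, and note that admin service WSDLs are hidden by default and may need to be enabled in the server configuration:

```shell
#!/bin/sh
# Sketch: push the same config to every environment via WSO2 IS admin
# services (SOAP endpoints under /services/). Hostnames, credentials,
# and the payload file are placeholders.

# Build the URL of a named admin service on a given host, e.g.
# IdentityApplicationManagementService, which manages service providers.
admin_url() {
  printf 'https://%s:9443/services/%s' "$1" "$2"
}

# for env in dev-is.example.com test-is.example.com prod-is.example.com; do
#   curl -k -u admin:admin \
#     -H 'Content-Type: text/xml; charset=utf-8' \
#     -H 'SOAPAction: urn:createApplication' \
#     --data @service-provider.xml \
#     "$(admin_url "$env" IdentityApplicationManagementService)"
# done
```

The same loop structure works for other admin services (claims, policies), with the payload XML exported once from a reference environment and replayed everywhere else.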

AWS Beanstalk deployment across multiple AWS accounts

I'm in the process of setting up multiple AWS accounts. The plan is to create separate accounts for each environment: DEV, QA, UAT & PROD.
Our web application is hosted using Elastic Beanstalk. The CI/CD pipeline tags and deploys a version to the Beanstalk application in the DEV account for each commit - this is working great.
We are trying to figure out how to deploy a chosen tagged version to a different AWS account (QA); we will have a Beanstalk application with the same name in QA as well.
I'm looking for a better way to manage the releases, please share your thoughts.
You should be able to use Named Profiles to target different accounts. The syntax might look something like eb deploy --profile qa myapp-env-qa.
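A minimal sketch of that flow, assuming one named profile per account in `~/.aws/credentials` (the profile and environment names here are made up):

```shell
#!/bin/sh
# Map each environment to the named profile of its AWS account.
# Profile names are assumptions; define them in ~/.aws/credentials.
profile_for() {
  case "$1" in
    dev)  echo my-dev-account ;;
    qa)   echo my-qa-account ;;
    prod) echo my-prod-account ;;
    *)    return 1 ;;
  esac
}

# Deploy a chosen application version label to the QA account:
# eb deploy --profile "$(profile_for qa)" myapp-env-qa --version v1.2.3
```

One caveat: Beanstalk application versions live per account, so the pipeline typically has to rebuild or re-upload the tagged bundle in the target account (for example via a shared S3 bucket) before `eb deploy` can reference it there.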

zero downtime deployment for frontend resources

My stack is WildFly, Angular, Spring, RDS, and CloudFront. Frontend resources (HTML/JS, etc.) are stored in the app (i.e., delivered by WildFly).
For the backend and DB I can deploy with zero downtime using two EC2 instances behind an ELB, but I am not sure how to handle this scenario:
a user gets the old JS/HTML from our server -> a new version is deployed -> the user clicks something that calls the old API (e.g., the new version has a new mandatory parameter).
Is there a way to avoid this? I can only think of giving the new parameter a default value. Or would API versioning make sense here?
Another question: what if the frontend resources are delivered by CloudFront + S3? How do we keep the deployment of new resources to S3 in sync with the backend?
I can only think of putting a default value for the new param. Or would API versioning make sense here?
This sounds like exactly what API versioning is intended to solve. You would change API versions anytime there is a change that would break clients of the previous version.
Another question: what if the frontend resources are delivered by CloudFront + S3? How to make the deployment of new resources to S3 in sync with the backend?
Deploying them at the same time is up to you. That's part of your deployment process that you need to automate somehow. You can use versioning and order of deployment to help some here. For example, if your entire front-end is deployed on S3:
1. Deploy a new version of your API, under a new API version number.
2. Deploy the new static UI resources.
3. Issue a CloudFront cache invalidation.
4. Users start seeing new front-end resources that reference the new back-end API version.
If your front-end UI is a mix of EC2 server dynamic resources and S3 static resources, and the EC2 UI components and the API are updated as part of the same deployment, then you can use a version prefix for your static resources on S3 to allow multiple versions to be available at once. For example:
1. Deploy the new static UI resources to S3, with a new version prefix. This ensures that both the previous version(s) of the S3 resources and the new version are available at the same time.
2. Deploy the EC2 app, which updates both the EC2 UI components and the API.
3. Users start loading the new version of the app from EC2, which references static resources under the new version prefix, which CloudFront then caches and serves.
Obviously those are just a few scenarios, and your situation probably differs in some way. In general, you need to version any resources (static S3 resources, API resources, etc.) and use a smart deployment order to ensure that the end user doesn't see an interruption in service.
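The versioned-prefix scenario could be sketched like this; the bucket name, distribution id, and path scheme are placeholders, not prescriptions:

```shell
#!/bin/sh
# Sketch of a versioned-prefix deploy order. Bucket, distribution id,
# and path scheme are placeholders for illustration.

# Static assets for a given version live under their own prefix, so the
# old and new versions coexist during the rollout.
assets_prefix() {
  printf 's3://my-frontend-bucket/assets/%s/' "$1"
}

deploy_version() {
  v="$1"
  # 1. Upload the new static resources under the new version prefix
  # aws s3 sync build/ "$(assets_prefix "$v")"

  # 2. Deploy the EC2 app, whose pages now reference /assets/$v/

  # 3. Invalidate only the entry points that must switch immediately
  # aws cloudfront create-invalidation --distribution-id E1234567890 \
  #   --paths '/index.html'
}
```

Because old asset prefixes stay in place, users who loaded the previous page keep getting a consistent set of resources; only freshly loaded pages pick up the new prefix, which keeps the frontend and backend versions in step.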