Strapi deployment on AWS Fargate (Serverless) + Aurora MySQL (Serverless)

I am trying to deploy Strapi to an AWS Fargate serverless environment through GitLab CI, using an Aurora MySQL database for the DB integration. The database is up and running properly. When my CI/CD pipeline tries to deploy to Fargate, somehow the app cannot connect to my DB. I am injecting env variables into the task definition through Secrets Manager.
I am getting the error "getaddrinfo ENOTFOUND" in the CloudWatch logs. I'm not sure what to do, as everything runs through CI/CD only. Do I need to declare anything about my database in the database.js or server.js file, in the Dockerfile, or in the GitLab CI configuration?
I know this is a very specific process, but I would appreciate it if anyone could help me out.
Thanks,
Tushar

Related

Deploy webapp to AWS Elastic Beanstalk using Jenkins Pipeline

I've manually deployed my web application to AWS EBS. We used to have a Jenkins pipeline that deployed the app to a Tomcat server running on AWS using mvn tomcat8:redeploy-only -Ddeploy.address=xx.xx.xx.xx:port
How do you deploy to AWS EBS with Jenkins? At the moment I'm having to upload the WAR file manually each time we have an update.
Any help is much appreciated.
Thanks
I haven't tried it, but there is a Jenkins plugin for Elastic Beanstalk.
Alternatively, you could install the EB CLI on your Jenkins nodes to manage your environments.
I used the AWS Beanstalk Publisher Jenkins plugin, which allowed me to set up post-build actions; that was the answer. You need to specify the S3 bucket where your app will be deployed in the settings and set up version labelling. Thanks to kgiannakakis for referring me to this.
A good video I used can be found here: deploy war file to aws ebs
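The EB CLI route mentioned above can also be scripted directly as a Jenkins build step. A sketch, assuming the EB CLI (the awsebcli package) is installed on the node, `eb init` has been run once in the workspace, and `my-env` is a placeholder for your actual environment name:

```shell
# build the WAR, then deploy the workspace to an existing EB environment
mvn -B clean package
eb deploy my-env --label "build-${BUILD_NUMBER}"
```

`eb deploy` zips the workspace, uploads it as a new application version to the configured S3 bucket, and updates the environment, so no manual WAR upload is needed.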

AWS: how to handle programmatic credentials when building a Docker container

I have a .NET Core app in which I'm using services such as S3, RDS and DynamoDB. Initially, every client instance was initialized using the Access_Key and Secret_Access_Key directly, so basically these two were stored in a configuration file. Recently we've started a process to automate the AWS infrastructure creation using Terraform; we are trying to migrate from managed containers (Fargate and Amplify) to ECS, and we've also migrated from using plain secrets to using profiles.
On Windows I've installed the AWS CLI to configure a profile, and under my
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't know exactly what steps to follow to configure a profile when using Docker on Linux. We are creating a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container should pop into existence, replacing the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built to replace the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.
You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be using a task IAM role. The AWS SDK's default credential chain picks the role's credentials up automatically, so no keys or profiles need to be baked into the image.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.
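A sketch of that wiring in Terraform (resource names and the attached policy are placeholders, not the poster's actual setup):

```hcl
# IAM role that ECS tasks are allowed to assume
resource "aws_iam_role" "task" {
  name = "app-task-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# grant the role only what the app needs (example: read-only S3)
resource "aws_iam_role_policy_attachment" "s3_read" {
  role       = aws_iam_role.task.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

# attach the role to the task; container definitions, cpu/memory etc. elided
resource "aws_ecs_task_definition" "app" {
  family                = "app"
  task_role_arn         = aws_iam_role.task.arn
  container_definitions = jsonencode([])
}
```

Note that `task_role_arn` (credentials for the app itself) is distinct from `execution_role_arn`, which ECS uses to pull images and fetch secrets on the task's behalf.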

What's the easiest way to deploy a multiservice Spring/Python project on AWS?

I have created a multiservice Spring/Python project. What's the easiest way to deploy it on AWS with 4 machines?
You can use several services to achieve this:
Elastic Beanstalk: Upload your code to Elastic Beanstalk; for any newer version, just upload it again and choose the deployment method, and it will automatically be deployed to the machines. You can choose whatever number of instances you want to spin up, along with a load balancer and more.
Documentation here
CodePipeline: Push your code to CodeCommit, GitHub or S3, and let CodePipeline use CodeBuild and CodeDeploy to deploy it to your EC2 servers.
Documentation here
CloudFormation: You can use this service to spin up your infrastructure purely through code, an approach called Infrastructure as Code. Write templates and spin up the instances.
Documentation here

AWS Fargate ECS CLI Compose Private Registry

I am trying to create a Fargate cluster using CloudFormation in AWS, which uses a bunch of images stored in a private registry behind username/password authentication.
This command
./ecs-cli.exe compose --project-name AdminUI service up --create-log-groups --cluster-config AdminUIConfig
results in an error
FATA[0302] Deployment has not completed: Running count has not changed for 5.00 minutes
After investigation, it appears the problem is the lack of basic auth against the repo that holds the images. How on earth do I pass this? I am currently running on Windows 10 using VS Code, if that matters. It feels like it is not client-side; it is the cluster itself that needs to send the authentication.
Sorry, I'm new to Docker and AWS.
Fargate currently only supports pulling images from an unauthenticated registry (like Docker Hub) or from Amazon ECR.
From the documentation:
The Fargate launch type only supports images in Amazon ECR or public repositories in Docker Hub.
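Note: this limitation has since been lifted. On Fargate platform version 1.2.0 and later, tasks can pull from an authenticated private registry by storing the username/password in a Secrets Manager secret and referencing it via `repositoryCredentials` in the container definition. A sketch of the relevant task-definition fragment (registry URL, secret ARN and names are placeholders):

```json
{
  "containerDefinitions": [
    {
      "name": "adminui",
      "image": "registry.example.com/adminui:latest",
      "repositoryCredentials": {
        "credentialsParameter": "arn:aws:secretsmanager:region:account-id:secret:private-registry-creds"
      }
    }
  ]
}
```

The task execution role also needs permission to read that secret (`secretsmanager:GetSecretValue`).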

Deploying failed in AWS Elasticbeanstalk

I have a problem deploying a PHP-based web application on AWS EB. I tried to push the files using Git Bash; it went fine and showed 100% completed.
Then I checked my AWS console; it shows "Environment update is starting." After a few seconds it shows
"Service:AmazonCloudFormation, Message:Stack:arn:aws:cloudformation:us-east-1:556003586595:stack/awseb-e-m3tbtwpcpe-stack/0bb57070-5fac-11e2-af2e-5081b23f0c86 is in UPDATE_ROLLBACK_FAILED state and can not be updated."
"Failed to deploy application."
Please, someone help me resolve this issue. I urgently need to get the web application set up on AWS EB as soon as possible.
Thanks in advance,
Sankar.
Have you tried rebuilding the environment from the AWS Console? Are you doing anything less common like running your Beanstalk environment inside of a VPC?
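Rebuilding the environment is one way out of this state; in the current AWS CLI, CloudFormation also offers a command that retries the failed rollback so the stack can return to a healthy state and accept updates again (stack name below copied from the error message):

```shell
# retry the rollback of a stack stuck in UPDATE_ROLLBACK_FAILED
aws cloudformation continue-update-rollback \
  --stack-name awseb-e-m3tbtwpcpe-stack
```

If the rollback keeps failing on specific resources, the command's `--resources-to-skip` option can exclude them, though skipped resources then have to be reconciled manually.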