We want to store all of our AWS account credentials in AWS Parameter Store for better security.
Now the question is:
How can we use the credentials stored in AWS Parameter Store in GitLab for deployments?
In your project, you can configure .gitlab-ci.yml to do many things; one of them is to deploy your application. There are many ways to do that, and one of them is to:
Build a Docker image of your project
Push the image to ECR
Register a new ECS task definition revision with the new version of your Docker image
Update your ECS service to use the new task definition revision
To do all of this, you effectively need the AWS credentials that you have configured in your GitLab repository (for example as CI/CD variables).
Beyond that, there are many ways to deploy from GitLab to AWS; it depends on your company and what tools you are using.
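As a minimal sketch (assuming the bootstrap AWS credentials live in GitLab CI/CD variables, and that the parameter name, ECR registry, cluster/service names and taskdef.json are all placeholders), a deploy job in .gitlab-ci.yml could look roughly like this:

```yaml
deploy:
  stage: deploy
  image: docker:latest                 # Alpine-based, so apk is available
  services:
    - docker:dind
  variables:
    AWS_DEFAULT_REGION: eu-west-1      # assumed region
  before_script:
    - apk add --no-cache aws-cli
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY come from GitLab CI/CD variables;
    # additional secrets can then be read from Parameter Store at deploy time
    - export APP_SECRET=$(aws ssm get-parameter --name /myapp/prod/app-secret --with-decryption --query 'Parameter.Value' --output text)
  script:
    - aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
    - docker build -t "$ECR_REGISTRY/myapp:$CI_COMMIT_SHORT_SHA" .
    - docker push "$ECR_REGISTRY/myapp:$CI_COMMIT_SHORT_SHA"
    # register a new task definition revision and point the service at it
    - aws ecs register-task-definition --cli-input-json file://taskdef.json
    - aws ecs update-service --cluster myapp-cluster --service myapp-service --task-definition myapp
```

In practice you would template taskdef.json so it references the freshly pushed image tag, and restrict the IAM user behind those CI/CD variables to exactly the ECR/ECS/SSM actions the pipeline needs.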
I have a .NET Core app in which I'm using services such as S3, RDS and Dynamo. Initially, every client instance was initialized using the Access_Key and Secret_Access_Key directly, so basically these two were stored in a configuration file. Recently we've started a process to automate the AWS infrastructure creation using Terraform. We are trying to migrate from managed containers (Fargate and Amplify) to ECS, and we've also migrated from using plain secrets to using profiles.
On Windows I've installed the AWS CLI to configure a profile, and under my
Users/{myUser}/.aws
the following two files were created: config and credentials.
But I don't know exactly how to configure a profile when using Docker on Linux, or what steps I should follow in a CI/CD pipeline where, after a commit and a successful build of a Docker image, a new container should spin up and replace the old one. Should I configure the AWS profile within the Docker container running the app? Should I generate a new set of keys every time a new container is built and replaces the old one? The way this approach sounds, I don't believe this is the way to do it, but I have no idea how to actually do it.
You shouldn't be using profiles when running inside AWS. Profiles are great for running the code locally, but when your code is deployed on ECS it should be utilizing a task IAM role.
You would manage that in Terraform by creating the IAM role, and then assigning the role to the task in the ECS task definition.
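A rough Terraform sketch of that idea (resource names, the attached policy and the container image are placeholders; a real Fargate setup usually also needs a separate execution role and networking):

```hcl
# Role that the running containers assume; the AWS SDK picks it up
# automatically through the default credential chain, so no keys are needed.
resource "aws_iam_role" "task_role" {
  name = "myapp-task-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
    }]
  })
}

# Grant the role only what the app needs, e.g. read access to S3
resource "aws_iam_role_policy_attachment" "task_s3" {
  role       = aws_iam_role.task_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess"
}

resource "aws_ecs_task_definition" "app" {
  family        = "myapp"
  task_role_arn = aws_iam_role.task_role.arn   # ties the role to the task

  container_definitions = jsonencode([{
    name      = "app"
    image     = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest"
    essential = true
    memory    = 512
  }])
}
```

With the task role in place, the .NET SDK clients for S3, RDS and Dynamo can be constructed without explicit keys, because the default credential chain resolves the role from inside the container.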
I have created a website using VS Code in Node.js with TypeScript.
Now I want to try to deploy it on AWS. I have read so many things about EC2, Cloud9, Elastic Beanstalk, etc...
So I'm totally lost about what to use to deploy my website.
Honestly, I'm a programmer, not a site manager or sysop.
Right now I have created two EC2 instances: one with a key name and one with no key name.
In the Elastic Beanstalk, I have a button Upload and Deploy.
Can someone show me how to turn my project into a valid package to upload and deploy?
I have never deployed a website (normally the sysops at work handled it), so I don't know what to do to get a correct distribution package.
Do I need to create both EC2 and Beanstalk?
Thanks
If you go with Elastic Beanstalk, it will take care of creating the EC2 instances for you.
It actually takes care of creating EC2 instances, a DB, load balancers, CloudWatch monitoring and much more. This is pretty much what it does: it bundles multiple AWS services and offers one panel of administration.
To get started with EB you should install the eb cli.
Then you should:
Go to your project directory and run eb init application-name. This starts an EB CLI wizard asking you in which region you want to deploy, what kind of DB you want, and so on.
After that, you need to run eb create envname to create a new environment for your newly created application.
At this point you should head to the EB panel in the AWS console and configure the start command for your app; it is usually something like npm run prod.
Because you're using TS, there are a few extra steps before you can deploy. You should run npm run build, or whatever command you have for transpiling from TS to JS; you'll be deploying the compiled scripts and not your source code.
Now you are ready to deploy: run eb deploy. As this is your only environment it should work; when you have multiple environments you can do eb deploy envname. To get a list of all environments you can run eb list.
There are quite a few steps to take care of before deploying, and any of them can cause issues.
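Roughly, the whole flow looks like the sketch below (the application and environment names are placeholders, and the build script is assumed to be called build):

```sh
# one-time setup in the project directory
npm run build            # transpile TS -> JS (use whatever build script you have)
eb init my-app           # wizard: pick region, platform, etc.
eb create my-app-env     # creates the environment (EC2, load balancer, ...)

# every subsequent release
npm run build
eb deploy my-app-env     # bundles the project and deploys it
eb list                  # shows all environments for the application
```

Note that by default eb deploy bundles what is committed in git, so make sure the compiled output ends up in the bundle (or configure a deploy artifact).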
If your website contains only static pages, you can use Amazon S3 to host it.
You can put your build files in an S3 bucket directly and enable static website hosting.
This will allow anyone to access your website from a URL globally; for this you also have to make your bucket public.
Alternatively, you can use CloudFront to keep your bucket private while still allowing access through the CloudFront URL.
You can refer to the links below for hosting a website through S3.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
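As a quick illustration with the AWS CLI (the bucket name and build directory are placeholders):

```sh
# create the bucket and enable static website hosting
aws s3 mb s3://my-website-bucket
aws s3 website s3://my-website-bucket --index-document index.html --error-document error.html

# upload the build output
aws s3 sync ./build s3://my-website-bucket

# the bucket still needs a public-read bucket policy, or a CloudFront
# distribution in front of it, before the site is reachable
```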
As I'm following a multi-instance deployment strategy as opposed to a multi-tenant one, I'm deploying my entire infrastructure again for every new customer. This results in a lot of work, as I have to:
Deploy a new API instance on Elastic Beanstalk + env variables
Deploy a new webapp instance via s3
Deploy a new file storage via s3
Deploy a new backup file storage via s3
Set up a new data pipeline backing up the file storage to the backup bucket
Map the API and web app instance to a new customer-specific URL (e.g. mycustomer.api.mycompany.com and mycustomer.app.mycompany.com) via Route 53 + CloudFront
...
Is there a way to automate all of this deployment? I've looked into CodeDeploy by AWS but that doesn't seem to fit my needs.
The AWS tool that you can use to build infrastructure again and again is CloudFormation. We call this technique Infrastructure as Code (IaC). You can also use Terraform if you don't want to use an AWS-specific tool.
You can use either YAML or JSON to define the template for your infrastructure.
And you'll be using Git for template change management.
Watch this re:Invent video to get the whole picture.
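As a deliberately tiny, hypothetical illustration, a per-customer template could take the customer name as a parameter and derive resource names from it (a real template would also declare the Elastic Beanstalk app, the data pipeline, Route 53 records, CloudFront distribution, and so on):

```yaml
# customer-stack.yaml
AWSTemplateFormatVersion: '2010-09-09'
Parameters:
  CustomerName:
    Type: String
Resources:
  FileStorageBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${CustomerName}-file-storage'
  BackupBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub '${CustomerName}-backup'
```

Each new customer then becomes a single command along the lines of aws cloudformation deploy --stack-name mycustomer --template-file customer-stack.yaml --parameter-overrides CustomerName=mycustomer.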
I have created a multi-service Spring/Python project. What's the easiest way to deploy it on the AWS cloud with 4 machines?
You can use multiple services to achieve this:
Elastic Beanstalk: If you have your code, you upload it to Elastic Beanstalk, and for any newer version you just upload it again and choose the deployment method; it will automatically be deployed to the machines. You can choose whatever number of instances you want to spin up, along with a load balancer and more.
Documentation here
CodePipeline: Push your code to CodeCommit, GitHub or S3, and let CodePipeline use CodeBuild and CodeDeploy to deploy it to your EC2 servers.
Documentation here
CloudFormation: You can use this service to spin up your services purely through code. This is called Infrastructure as Code: write the template and it spins up the instances.
Documentation here
We are using GitLab as our repo and decided to go with GitLab CI. We are using the Serverless Framework to deploy our code to AWS. I want to integrate AWS profiles with GitLab so that it can call the specific profile and deploy into the AWS account specified. I have tried hard-coding the variables, but if I have to deploy using a different profile, I need to change all the gitlab-ci files, as I have more than 100 repos.
Is there any way to configure AWS profiles in GitLab?
Basically my GitLab CI jobs run on Docker, so I created a Docker image with all the prerequisites needed for my deployment. Now my runtime is the same as my local machine with the AWS CLI installed, and I can use my AWS profiles for the deployment in the serverless files.
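As a rough sketch of that setup (the image name, profile name and variable names are placeholders), the deploy job can build the named profile from per-project CI/CD variables and hand it to the Serverless Framework:

```yaml
deploy:
  image: registry.example.com/deploy-tools:latest   # custom image with aws-cli + serverless preinstalled
  script:
    # build the named profile from CI/CD variables at job start
    - aws configure set aws_access_key_id "$DEV_AWS_ACCESS_KEY_ID" --profile dev-account
    - aws configure set aws_secret_access_key "$DEV_AWS_SECRET_ACCESS_KEY" --profile dev-account
    - aws configure set region eu-west-1 --profile dev-account
    # serverless.yml can reference the profile via provider.profile,
    # or it can be passed on the command line
    - serverless deploy --stage dev --aws-profile dev-account
```

To avoid editing 100+ repos, a job like this could also live in a shared CI template that each project includes.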