Continuous deploys on Elastic Beanstalk - amazon-web-services

I have everything set up and working with rolling deploys and being able to do git aws.push, but how do I add an authorized key to the EB server so my CI server can deploy as well?

Since you are using Shippable, I found this guide on Continuous Delivery using Shippable and Amazon Elastic Beanstalk that shows how to set it up on their end. Specifically, step 3 is what you are looking for.
It doesn't look like you need an authorized key; instead, you just need to provide an AWS Access Key ID and AWS Secret Access Key that will allow Shippable to make API calls on your behalf. To do this, I recommend creating an IAM user specifically for Shippable (access keys belong to users, not roles). That way you can revoke it if you ever need to, and only give it the permissions it needs. A sketch of what that might look like is below.
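A minimal sketch with the AWS CLI, assuming a dedicated user named shippable-deploy and a least-privilege policy file you write yourself (the exact actions depend on your app; EB deploys also touch S3, and often EC2, Auto Scaling, and CloudFormation):

```
# Hypothetical: create a CI-only IAM user and scope it down with an
# inline policy. eb-deploy-policy.json is your own policy document;
# the user name is just an example.
aws iam create-user --user-name shippable-deploy
aws iam put-user-policy --user-name shippable-deploy \
  --policy-name eb-deploy-only \
  --policy-document file://eb-deploy-policy.json
# Prints the Access Key ID and Secret Access Key to paste into Shippable.
aws iam create-access-key --user-name shippable-deploy
```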

Related

Gitlab CI/CD deploy to AWS via aws-azure-cli authentication

When deploying to AWS from a gitlab-ci.yml file, you usually run aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; my workstation is then given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect (OIDC) to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OpenID roles instead of storing actual credentials. The steps are:
Add the identity provider for GitLab in AWS.
Configure the role and its trust policy.
Retrieve temporary credentials in the job (a sketch is below).
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
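A minimal sketch of the last step as a job script, assuming the job requests an ID token (exposed here as GITLAB_OIDC_TOKEN, the name used in the GitLab docs' id_tokens example) and that ROLE_ARN is a CI/CD variable pointing at the role whose trust policy allows the GitLab OIDC provider:

```
# Hypothetical: exchange the GitLab ID token for one-hour AWS credentials.
creds=$(aws sts assume-role-with-web-identity \
  --role-arn "$ROLE_ARN" \
  --role-session-name "gitlab-$CI_PIPELINE_ID" \
  --web-identity-token "$GITLAB_OIDC_TOKEN" \
  --duration-seconds 3600 \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
aws sts get-caller-identity   # sanity check: should print the assumed role
```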

Private AWS credentials being shared with Serverless.com?

I've been having trouble with a deployment with a serverless-component, so I've been trying to debug it. Stepping through the code, I actually thought I'd be able to step into the component itself and see what was going on.
But to my surprise, I couldn't actually debug it, because the component doesn't actually exist on my computer. Apparently the serverless CLI is sending a request to a server, and the request seems to include everything serverless needs to build and deploy the actual service, which includes my AWS credentials...
Is this a well-known thing? Is there a way to force serverless to build and deploy locally? This really caught me by surprise, and to be honest I'm not very happy about it.
I haven't used their platform (I thought the CLI only executed locally; this seems very risky), but you can make it more secure as follows:
First, set up an IAM role that can only perform the deploy actions for your app. Then create a profile that assumes this role when you work on your serverless app and use the CLI (a sketch is below).
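A minimal sketch of such a profile; the role ARN and profile names are placeholders you would replace with your own:

```
# Hypothetical ~/.aws/config entry: the "serverless-deploy" profile
# assumes a narrowly scoped deploy-only role via your default credentials.
[profile serverless-deploy]
role_arn = arn:aws:iam::123456789012:role/serverless-deploy-only
source_profile = default
```

The Serverless Framework should then pick it up through the standard AWS_PROFILE environment variable, e.g. `AWS_PROFILE=serverless-deploy serverless deploy`.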
Second, you can also avoid long-term CLI credentials (IAM users) by using the AWS SSO functionality, which generates CLI credentials that last an hour; with the AWS CLI you can log in from the command line, I believe. What this means is that your CLI credentials will live for at most one hour.
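For reference, a sketch of that flow with AWS CLI v2 (the profile name is an example):

```
# Hypothetical: one-time interactive setup of an SSO-backed profile,
# then a browser-based login that yields short-lived credentials.
aws configure sso
aws sso login --profile my-sso-profile
aws sts get-caller-identity --profile my-sso-profile
```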
If the requests always come from the same IP, you can also restrict that in an IAM policy, but I wouldn't imagine there is any guarantee that their IP will always be the same.

Cross-account deployment in AWS through the CodeDeploy service

We have two AWS accounts, say Dev and Prod. In the Dev account, our CodeBuild, CodePipeline, and CodeDeploy services are configured with S3. In the Prod account, an Auto Scaling group is running for the production websites.
As per our requirement, we want to deploy the code from the Dev account to the Prod account as a cross-account deployment. Basically, CodeBuild and CodePipeline will build the code, and CodeDeploy will deploy it to the Prod account's Auto Scaling group.
Can someone give us some insight into how to achieve this?
Thanks
CodePipeline supports cross-account actions; however, it's not currently configurable via the console and requires some extra roles to be configured.
Here's a guide on how to make it work: https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-create-cross-account.html
As of today, CodeDeploy doesn't support cross-account deployments. Depending on what your goal is, you might be able to achieve it another way.
I want to deploy a bundle in one account to another account
If your S3 bucket allows access to the second account, CodeDeploy doesn't care which account your bundle is in, as long as everything can access it. Per @TimB, it looks like CodePipeline can support that behavior. (A sketch of such a bucket policy follows this list.)
I need to initiate a deployment in one account to another
If you have a reason why the deployment must be initiated in one account and target another, you could register the instances in the second account as on-premises instances, though this is not a great solution.
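For the first scenario, a minimal sketch of a bucket policy granting the second account read access to the artifact bucket; the bucket name and account ID are placeholders:

```
# Hypothetical: let the Prod account read deployment bundles from the
# Dev account's artifact bucket.
aws s3api put-bucket-policy --bucket dev-codedeploy-artifacts --policy '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::dev-codedeploy-artifacts/*"
    },
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::dev-codedeploy-artifacts"
    }
  ]
}'
```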

How To Set Up An AWS CodeDeploy & EC2 CodeDeploy Secure Environment

While setting up the EC2 and AWS roles for deploying a website from CodeCommit using CodePipeline, there was little detail about the potential security concerns to take into account (in the various online tutorials, which were few and far between).
For the IAM roles for the EC2 instance and for AWS CodeDeploy, what are the bare minimum requirements for a secure and safe environment that can still deploy?
My environment uses this for development (inside a public subnet) and for a live website (inside a private subnet, accessed via an ELB). The sites are coded in PHP.
My concern is that someone could somehow inject their own PHP code through some unknown method and take down the CodeCommit (source) repository or do other mischievous things.
Thanks!
To use CodeDeploy, the IAM role for your EC2 instances should at minimum have permission to pull your application artifact from the S3 bucket, plus whatever permissions to other AWS services your website depends on. A sketch of such a minimal instance policy is below.
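A minimal sketch, assuming a role named ec2-codedeploy-instance is already attached to the instances and the bucket name is a placeholder:

```
# Hypothetical: read-only access to the single deployment bucket and
# nothing else; widen only as your site actually requires.
aws iam put-role-policy --role-name ec2-codedeploy-instance \
  --policy-name codedeploy-artifact-read \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::my-deploy-bucket",
        "arn:aws:s3:::my-deploy-bucket/*"
      ]
    }]
  }'
```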

How do I provide AWS credentials to Kubernetes?

I'm setting up a Kubernetes cluster on AWS, and as part of the configuration for, say, the API server, I provide the --cloud-provider=aws setting.
Once it starts up, however, I see in the logs that it complains about not having AWS credentials:
NoCredentialProviders: no valid providers in chain
After some searching, it seems that this issue was resolved for most people by using the "kube-up" script. However, for those who are not using the script to set up their cluster, how do we provide Kubernetes with AWS credentials?
It sounds like you don't have the appropriate IAM instance profile set on your master VM. The kube-up script for AWS creates a role and associated policy that is attached to the master VM when it is created. Having the IAM policy attached should give you the credentials necessary to make API calls into AWS.
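A minimal sketch of doing that by hand with the AWS CLI, assuming placeholder names and a policy file (k8s-master-policy.json) that mirrors what kube-up grants, at minimum EC2, ELB, and EBS permissions:

```
# Hypothetical: create a role EC2 can assume, wrap it in an instance
# profile, and attach it to the master VM.
aws iam create-role --role-name k8s-master \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
aws iam put-role-policy --role-name k8s-master \
  --policy-name k8s-master \
  --policy-document file://k8s-master-policy.json
aws iam create-instance-profile --instance-profile-name k8s-master
aws iam add-role-to-instance-profile \
  --instance-profile-name k8s-master --role-name k8s-master
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=k8s-master
```

Once the profile is attached, the AWS SDK inside the API server picks up credentials from the instance metadata service, so nothing needs to be written into the Kubernetes configuration.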