GitLab CI/CD deploy to AWS via aws-azure-cli authentication

When deploying to AWS from a gitlab-ci.yml file, you usually run aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; my workstation is then given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to obtain these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I could reach out to our cloud services team, but that would take a week.

You can configure OpenID Connect (OIDC) to retrieve temporary credentials from AWS without needing to store secrets.
In my view it's actually a best practice, too, to use OIDC roles instead of storing long-lived credentials.
Add the identity provider for GitLab in AWS
Configure the role and its trust policy
Retrieve temporary credentials in the pipeline job
Follow this guide: https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed version: https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
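As a rough sketch, the job in .gitlab-ci.yml ends up looking something like the example in the GitLab guide above; the ROLE_ARN CI/CD variable is an assumption here and should hold the ARN of the role you configured for the trust:

assume role:
  image:
    name: amazon/aws-cli:latest
    entrypoint: [""]
  id_tokens:
    # GitLab issues an OIDC token for the job and exposes it in this variable
    GITLAB_OIDC_TOKEN:
      aud: https://gitlab.com
  script:
    # Exchange the OIDC token for temporary AWS credentials, no stored secrets
    - >
      export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s"
      $(aws sts assume-role-with-web-identity
      --role-arn ${ROLE_ARN}
      --role-session-name "GitLabRunner-${CI_PROJECT_ID}-${CI_PIPELINE_ID}"
      --web-identity-token ${GITLAB_OIDC_TOKEN}
      --duration-seconds 3600
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text))
    # Sanity check: confirm which role the job is now acting as
    - aws sts get-caller-identity

After that exchange, any aws-cli deploy commands in the same job run with the temporary credentials, which expire on their own.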

Related

Is it possible to store AWS account credentials via IAM roles?

I am currently trying to deploy some Terraform code via GitLab CI/CD pipelines, and I was curious to know the safest way to authenticate GitLab to deploy to the AWS accounts. I am currently storing the access key and secret key in the env variables, but is it possible to do it via IAM roles?

Authentication to GCP in Terraform

We need to create GCP resources with Terraform, but we are stuck at the terraform init stage while Terraform tries to authenticate to GCP. We have already configured our backend and obtained our service account key, but minifying the credential JSON (removing the extra lines) and exporting it to GOOGLE_CREDENTIALS doesn't work. How are you setting this value?
If you are in a local and controlled environment you can use GOOGLE_APPLICATION_CREDENTIALS and set it to the path of the JSON key file. But, as discussed, key files are bad practice security-wise. An alternative is to authenticate using gcloud auth application-default login, so you don't have to deal with key files at all.
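As a minimal sketch, assuming a key file at a made-up path, the two options look like this:

# Option 1: local, controlled environment; point Terraform at a key file
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/terraform-sa.json"
terraform init

# Option 2: no key file; use Application Default Credentials from your own login
gcloud auth application-default login
terraform init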
Another alternative is to use Google Cloud Shell, which is already set up with the credentials of the authorised user opening the session.
Finally, for automated pipelines, you can use Google Cloud Build, where processes run using the authentication and authorisation of the service account used by Cloud Build.

How to avoid using user profile to perform S3 operations without EC2 instances

According to much advice, we should not configure an IAM user but use an IAM role instead, so nobody can grab the user credentials from the .aws folder.
Let's say I don't have any EC2 instances. Can I still perform S3 operations via the AWS CLI? Say, aws s3 ls:
MacBook-Air:~ user$ aws s3 ls
Unable to locate credentials. You can configure credentials by running "aws configure".
You are correct that, when running applications on Amazon EC2 instances or as AWS Lambda functions, an IAM role should be assigned that will provide credentials via the EC2 metadata service.
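Just to illustrate what the CLI and SDKs do for you on EC2, the role's credentials come from the instance metadata endpoint; the role name below is a placeholder:

# IMDSv2: get a session token, then read the role's temporary credentials
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role"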
If you are not running on EC2/Lambda, then the normal practice is to use IAM User credentials that have been created specifically for your application, with least possible privilege assigned.
You should never store the IAM User credentials in an application -- there have been many cases of people accidentally saving such files into GitHub, and bad actors grab the credentials and have access to your account.
You could store the credentials in a configuration file (eg via aws configure) and keep that file outside your codebase. However, there are still risks associated with storing the credentials in a file.
A safer option is to provide the credentials via environment variables, since they can be defined through a login profile and will never be included in the application code.
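For example, both options look like this (the key values are placeholders):

# Option A: config file outside the codebase (writes ~/.aws/credentials)
aws configure

# Option B: environment variables, e.g. defined in your login profile
export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
aws s3 ls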
I don't think you can use service roles on your personal machine.
You can, however, use multi-factor authentication with the AWS CLI.
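For instance, you can trade your long-lived keys plus an MFA code for temporary session credentials (the MFA device ARN is a placeholder):

# Returns AccessKeyId, SecretAccessKey and SessionToken valid for a limited time
aws sts get-session-token \
  --serial-number arn:aws:iam::123456789012:mfa/my-user \
  --token-code 123456

The returned values can then be exported as the AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN environment variables, as above.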
You can use credentials on any machine, not just EC2.
Follow the steps as described by the documentation for your OS.
http://docs.aws.amazon.com/cli/latest/userguide/installing.html

AWS role to grant access to use AWS resources by local Spinnaker instance

I am trying to set up Spinnaker locally to manage AWS EC2 instances. The current documentation depicts steps that require the Spinnaker instance to be running on EC2: they create one role and attach it to the Spinnaker instance. As I am running Spinnaker in my local environment, I am looking for a way to allow my local Spinnaker instance to access the AWS resources. Would it be possible to have one such policy/role? Maybe using AWS STS (Security Token Service), but I don't know how to use those credentials with the Spinnaker instance.
You can do this directly by creating an IAM user with the required policies to access AWS resources, then use the programmatic access credentials on your local machine with the AWS CLI, API, or SDKs.
For an existing IAM user, the steps are as follows:
IAM User -> Security Credentials -> Create Access Keys
Note: If you cannot trust your local environment, you can use the AWS STS service (for this you need to implement a separate service, where you pass user credentials and request a temporary token from AWS STS).
You can create an IAM role for your local machine to assume, like this example, or stricter; Spinnaker will handle the STS assume-role given it's configured properly.
As for the temporary credentials, if what you mean is MFA compatibility, I am still figuring that out myself. I think one workaround is to create a wrapper script that calls sts:AssumeRole, asks the user for the MFA token, then sets AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN, which will be honored by clouddriver; but then deployment to multiple AWS accounts will be a problem.
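A hypothetical version of that wrapper could look like the following; ROLE_ARN and MFA_ARN are placeholders for your own values, and I'm using the standard AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY variable names that the AWS SDKs read:

#!/bin/sh
# Prompt for the current MFA code
printf "MFA token: "
read TOKEN
# Assume the role and pull out the three credential fields (tab-separated)
CREDS=$(aws sts assume-role \
  --role-arn "$ROLE_ARN" \
  --role-session-name spinnaker-local \
  --serial-number "$MFA_ARN" \
  --token-code "$TOKEN" \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

Source the script (. ./assume-role.sh) rather than executing it, so the exported variables persist in the shell that launches clouddriver.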

Continuous deploys on Elastic Beanstalk

I have everything set up and working with rolling deploys and being able to do git aws.push, but how do I add an authorized key to the EB server so my CI server can deploy as well?
Since you are using Shippable, I found this guide on Continuous Delivery using Shippable and Amazon Elastic Beanstalk that shows how to set it up on their end. Specifically, step 3 is what you are looking for.
It doesn't look like you need an authorized key; instead, you just need to provide an AWS ID and AWS secret key that will allow Shippable to make API calls on your behalf. To do this, I recommend creating an IAM user specifically for Shippable. That way you can revoke its keys if you ever need to, and grant only the permissions it needs.
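A quick sketch of that setup with the CLI; the user name and policy ARN are placeholders, and the policy should be scoped down to just the Elastic Beanstalk permissions Shippable needs:

# Dedicated user so its keys can be revoked independently
aws iam create-user --user-name shippable-deploy
# Attach a least-privilege policy you have defined for EB deploys
aws iam attach-user-policy --user-name shippable-deploy \
  --policy-arn arn:aws:iam::123456789012:policy/shippable-eb-deploy
# Generates the access key ID / secret pair to paste into Shippable
aws iam create-access-key --user-name shippable-deploy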