I am trying to build a CI/CD pipeline using AWS CodePipeline.
I am integrating GitLab with AWS S3, using this link:
https://aws.amazon.com/quickstart/architecture/git-to-s3-using-webhooks/
When code is pushed to a specific branch, the AWS API is called (I can see this in the CloudWatch logs), but I am getting the error below:
Failed to authenticate SSH session: Waiting for USERAUTH response:
GitError
Do I need to configure the GitLab username/keys anywhere on the AWS/S3/CloudFormation side?
I have configured the Git pull URL (GitPullWebHookApi) on the GitLab webhooks side.
I have configured the PublicSSHKey from the AWS CloudFormation stack as the Secret Token in GitLab.
Am I missing any step?
Is there any document that specifies the steps to configure the GitLab keys/user credentials for this integration?
Add the SSH public key ("PublicSSHKey") generated by the CloudFormation stack to the GitLab user's public key settings. Please remember that the public key needs to be added to the account of each user who needs to invoke the pipeline by committing a change to the Git repository. The Outputs tab of the CloudFormation stack contains the two webhook endpoint URLs, the output bucket name, and the public SSH key [1].
[1] https://aws-quickstart.s3.amazonaws.com/quickstart-git2s3/doc/git-to-amazon-s3-using-webhooks.pdf
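If you want to grab that key without opening the console, here is a minimal sketch using the AWS CLI (the stack name is a placeholder for whatever you named the quick start stack):
# List the stack outputs; the PublicSSHKey output is the value to paste into GitLab.
aws cloudformation describe-stacks \
  --stack-name git2s3-stack \
  --query "Stacks[0].Outputs" \
  --output table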
If a CodeBuild project runs on a custom image that has the AWS CLI preinstalled but not configured for that AWS account, would it still be possible to run aws * commands in that project's buildspec without setting up AWS credentials there first?
In other words, are these credentials made available by CodeBuild (e.g. via automatically picked-up environment variables), or, if I am using a custom image, is it up to me to take care of that explicitly, with aws * only expected to work out of the box in the buildspec on CodeBuild-managed images?
(I mean the configuration/credentials for the account and role the CodeBuild project in question operates under.)
When you attach an IAM service role to your AWS CodeBuild project, you don't need to configure the AWS CLI. The service role is part of the environment configuration, and it will be assumed whenever you try to access resources in AWS. The same applies when you use a custom image for AWS CodeBuild.
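A quick way to confirm this from inside a build, whatever the image, is a sketch like the following; the CLI resolves the project's service role from the build environment without any aws configure step:
# Run inside a CodeBuild phase (e.g. in the buildspec's commands section).
# No explicit credentials: the CLI picks up the project's service role
# from the build environment automatically.
aws sts get-caller-identity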
I am trying to set up HashiCorp Vault on AWS. There is a quick start guide to install Vault along with Consul.
https://aws.amazon.com/quickstart/architecture/vault/
https://aws-quickstart.s3.amazonaws.com/quickstart-hashicorp-vault/doc/hashicorp-vault-on-the-aws-cloud.pdf
I followed all the steps for setting up Vault on AWS, both in a new VPC and in an existing VPC, but I was unable to complete the entire process successfully.
While trying to set up Vault, the stack creation failed. (The stack parameters and CloudFormation events are not reproduced here.)
While trying to install Vault in an existing VPC, it got installed but the initialization failed, as I have described in this GitHub issue: https://github.com/aws-quickstart/quickstart-hashicorp-vault/issues/42
I received a similar error when I was running through the same quick start. My issue was that I hadn't added an SSL certificate ARN to the CloudFormation template inputs.
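If it helps, here is a hedged sketch for finding an existing ACM certificate ARN to paste into that input (the region is an example; the exact parameter name comes from the quick start template):
# List ACM certificates in the deployment region and copy the ARN you need.
aws acm list-certificates --region us-east-1 \
  --query "CertificateSummaryList[].{Domain:DomainName,Arn:CertificateArn}" \
  --output table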
Asking the community whether it's possible to do the following (I had no luck finding further information).
I'm creating a CI/CD pipeline with GitHub / Cloud Build / Terraform. Cloud Build applies the Terraform configuration upon a GitHub pull request and merge to a new branch. However, I want the (default) Cloud Build service account to be used with least privilege.
The question: I would like Terraform to pull its permissions from an existing service account with least privilege, to prevent any exploits, once the Cloud Build trigger fires and initializes the Terraform configuration. In other words, Terraform would use an existing, external service account to obtain the permissions it needs to build the infrastructure.
I tried creating the service account and binding roles to it, but I get an error stating that the service account already exists.
My next step is to use a module, but I think this will also create a new service account with replicated roles.
If this is confusing I apologize; I'm happy to refine the question to be more concise.
You have 2 solutions:
Use the Cloud Build service account when you execute your Terraform. Your provider looks like this:
provider "google" {
// Useless with Cloud Build
// credentials = file("${var.CREDENTIAL_FILE}}")
project = var.PROJECT_ID
region = "europe-west1"
}
But this solution implies granting several roles to the Cloud Build service account just for the Terraform process. A custom role is a good choice for granting only what is required.
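For example, a minimal sketch of such a custom role, with illustrative project, role and permission names (the permission list depends entirely on what your Terraform manages, and the project number in the Cloud Build service account is a placeholder):
# Create a custom role with only the permissions Terraform needs,
# then grant it to the Cloud Build service account.
gcloud iam roles create terraformDeploy \
  --project=my-project \
  --title="Terraform deploy" \
  --permissions=compute.instances.create,compute.instances.delete,compute.instances.get

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:123456789012@cloudbuild.gserviceaccount.com" \
  --role="projects/my-project/roles/terraformDeploy"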
The second solution is to use a service account key file. Here again there are 2 options:
Cloud Build creates the service account, grants all the required roles to it, generates a key, and passes it to Terraform. After the Terraform execution, the service account is deleted by Cloud Build. A good solution, but you have to grant the Cloud Build service account the ability to grant any role and to generate a JSON key file. That's a lot of responsibility!
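A rough sketch of that first flow, with placeholder names and an intentionally broad role to keep the example short:
# Create a throwaway service account, grant it a role, generate a key,
# run Terraform with it, then delete the account (all names are placeholders).
gcloud iam service-accounts create tf-ephemeral --display-name="Ephemeral Terraform SA"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:tf-ephemeral@my-project.iam.gserviceaccount.com" \
  --role="roles/editor"
gcloud iam service-accounts keys create tf-key.json \
  --iam-account=tf-ephemeral@my-project.iam.gserviceaccount.com
# ... run terraform with tf-key.json ...
gcloud iam service-accounts delete tf-ephemeral@my-project.iam.gserviceaccount.com --quiet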
Use an existing service account and a key generated for it. But you have to secure the key and rotate it regularly. I recommend storing it securely in Secret Manager, but as for the rotation, you have to manage it yourself today. With this process, Cloud Build downloads the key (from Secret Manager) and passes it to Terraform. Here again, the Cloud Build service account has the right to access secrets, which is a critical privilege. The step in Cloud Build looks something like this:
steps:
  - name: gcr.io/cloud-builders/gcloud:latest
    entrypoint: "bash"
    args:
      - "-c"
      - |
        gcloud beta secrets versions access latest --secret=test-secret > my-secret-file.txt
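A possible follow-up, sketched under the assumption that the Terraform commands run in the same bash step (environment variables don't carry across Cloud Build steps, but files in /workspace do), is to point Terraform at the downloaded key, which the google provider can pick up through GOOGLE_APPLICATION_CREDENTIALS when no credentials argument is set:
# Continue in the same step: use the key file fetched above.
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/my-secret-file.txt"
terraform init
terraform apply -auto-approve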
I have been invited to a project that has a repository stored in AWS CodeCommit. I received an Access Key ID, Secret Key, region and repository URL. I created an account in AWS (I didn't have one before) and created a new IAM user with the AWSCodeCommitFullAccess policy, but I have no idea how to bind this user to the repository I was given. The console available at https://console.aws.amazon.com/codecommit/home points me to documentation or allows me to create an empty repository, and the access keys panel in IAM only allows me to create new access keys, not provide existing ones... How can I get to an existing repository then? Maybe the owner needs to do something as well?
Try following these steps:
To install and configure the AWS CLI:
On your local machine, download and install the AWS CLI. This is a prerequisite for interacting with AWS CodeCommit from the command line. (Install the latest version by following this guide.)
Run this command to verify that the AWS CodeCommit commands for the AWS CLI are installed:
aws codecommit help
This command should return a list of AWS CodeCommit commands.
Configure the AWS CLI with the configure command, as follows:
aws configure
When prompted, specify the AWS access key and AWS secret access key of the IAM user whose keys you were given. Also, be sure to specify the region where the repository exists, such as us-east-2. When prompted for the default output format, specify json. For example:
AWS Access Key ID [None]: Type your target AWS access key ID here, and then press Enter
AWS Secret Access Key [None]: Type your target AWS secret access key here, and then press Enter
Default region name [None]: Type a supported region for AWS CodeCommit here, and then press Enter
Default output format [None]: Type json here, and then press Enter
Next, assuming you have Git pre-installed on your machine, set up the credential helper:
From the terminal, use Git to run git config, specifying the use of the Git credential helper with the AWS credential profile, and enabling the Git credential helper to send the path to repositories:
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
Now you can connect to your Git repository the way you normally do; refer to the AWS documentation for more details.
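For example, once the credential helper is set up, a clone over HTTPS looks like this (the region and repository name below are placeholders; use the URL you were given):
git clone https://git-codecommit.us-east-2.amazonaws.com/v1/repos/MyDemoRepo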
It seems you want to contribute to a repository that already exists in another account. To access the repository data with git clone, the provided Access Key ID, Secret Key, region and repository URL should be sufficient, but you have to use the AWS CLI credential helper by following the instructions here: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-https-unixes.html. There are other ways to access the repository as well; please take a look at the doc here: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up.html.
If you want to check the code via the AWS console, you can access the console using this URL: https://[account_id].signin.aws.amazon.com/console (replace account_id with the ID of the account the repository belongs to). You need to provide the username and console login password of an IAM user that has permission to read the CodeCommit repository.
In my Jenkinsfile, I am trying to push the image that I have built using the Docker plugin, as follows:
docker.withRegistry('https://<my-id>.dkr.ecr.us-east-1.amazonaws.com/', 'ecr:us-east-1:awscreds') {
    docker.image('image').push('latest')
}
The pipeline fails every time with the message ERROR: Could not find credentials matching ecr:us-east-1:awscreds but I do have my AWS key ID and secret key in my Jenkins credentials with the ID "awscreds".
What could be a potential fix for this?
Alternatively, can I provide my credentials directly instead of mentioning the credential ID in the call?
I had the same error message. Make sure the Amazon ECR plugin is installed and up to date, and that you restart Jenkins after the installation.
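If you prefer not to rely on the plugin's credential-ID lookup at all, one alternative sketch is to log Docker in to ECR from a shell step before pushing (the account ID and region are placeholders, and AWS credentials must still be available to the agent, e.g. via an instance profile or environment variables):
# Authenticate the Docker daemon against ECR, then push with plain docker commands.
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin <my-id>.dkr.ecr.us-east-1.amazonaws.com
docker tag image:latest <my-id>.dkr.ecr.us-east-1.amazonaws.com/image:latest
docker push <my-id>.dkr.ecr.us-east-1.amazonaws.com/image:latest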