Install and initialise google-cloud-cli in AWS CodeBuild buildspec.yml

I am trying to push a Docker container image built in an AWS CodeBuild project to GCP Artifact Registry. In order to push the image from the AWS-managed Ubuntu CodeBuild environment, I will need to install and initialise google-cloud-cli. However, to authenticate/activate the CLI using a service account, it requires a service-account-key.json file containing the service account credentials, as mentioned here: https://cloud.google.com/container-registry/docs/advanced-authentication.
I would like to avoid having to set up an EFS just to pass a JSON file to the build server. What is the best way to authenticate google-cloud-cli using a service account without having to use a JSON file?
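One pattern that avoids EFS entirely (not from the original post, just a sketch, and it still materialises the key as a temporary file inside the ephemeral build container) is to store the key JSON in AWS Secrets Manager and let CodeBuild inject it as an environment variable. The secret name, install directory, and Artifact Registry host below are placeholder assumptions:
version: 0.2
env:
  secrets-manager:
    # Hypothetical secret holding the service-account key JSON
    GCP_SA_KEY: gcp/artifact-registry-sa-key
phases:
  install:
    commands:
      # Install the Google Cloud CLI into the Ubuntu build container
      - curl -sSL https://sdk.cloud.google.com -o /tmp/install-gcloud.sh
      - bash /tmp/install-gcloud.sh --disable-prompts --install-dir=/usr/local
  pre_build:
    commands:
      # Write the key only inside the build container, then activate the service account
      - echo "$GCP_SA_KEY" > /tmp/sa-key.json
      - /usr/local/google-cloud-sdk/bin/gcloud auth activate-service-account --key-file=/tmp/sa-key.json
      # Configure docker to authenticate against Artifact Registry (registry host is a placeholder)
      - /usr/local/google-cloud-sdk/bin/gcloud auth configure-docker us-central1-docker.pkg.dev --quiet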

Related

How to create a CloudFormation template from a SAM project?

I am trying to convert a SAM project to a CloudFormation template in order to call
cloudformation.createStack()
to create multiple stacks when a lambda is invoked. So far I can upload the SAM project with
sam build
sam package
But the size of the S3 upload is too big and I am getting errors. What are the steps to correctly upload the CloudFormation template?
These pre-reqs need to be met before continuing:
1. Install the SAM CLI.
2. Create an Amazon S3 bucket to store the serverless code artifacts that the SAM template generates. At a minimum, you will need permission to put objects into the bucket.
3. The permissions applied to your IAM identity must include iam:ListPolicies.
4. You must have AWS credentials configured either via the AWS CLI or in your shell's environment via the AWS_* environment variables.
5. Git installed.
6. Python 3.x installed.
7. (Optional) Install Python's virtualenvwrapper.
Source: https://www.packetmischief.ca/2020/12/30/converting-from-aws-sam-to-cloudformation/
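For reference, a minimal command sequence along the lines the linked post describes; the bucket and stack names are placeholders:
sam build
# Upload the code artifacts to S3 and rewrite their locations in the template
sam package --s3-bucket my-artifact-bucket --output-template-file packaged.yaml
# packaged.yaml is now a plain CloudFormation template that can be deployed
# directly, or passed to cloudformation.createStack() from the Lambda
aws cloudformation deploy --template-file packaged.yaml --stack-name my-stack --capabilities CAPABILITY_IAM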

Deploy to AWS EC2 from AWS S3 via Bitbucket Pipelines

I have a requirement to do CI/CD using Bitbucket Pipelines.
We use Maven to build our code on Bitbucket Pipelines and push the artifacts (jars) to AWS S3. The missing link is to figure out a way to get the artifacts from S3 and deploy them to our EC2 instance.
It should all work from the Bitbucket Pipelines YAML, hopefully using Maven plugins.
For pushing the artifacts to S3 we use:
<groupId>com.gkatzioura.maven.cloud</groupId>
<artifactId>s3-storage-wagon</artifactId>
Is there a way/plugin that will download the artifact from the S3 bucket, deploy it to a specific folder on the EC2 instance, and perhaps call a shell script to run the jars?
Thank you!
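For context, that wagon is typically wired into the pom roughly like this (the version number, repository id, and bucket name are placeholders, not taken from the question):
<build>
  <extensions>
    <extension>
      <groupId>com.gkatzioura.maven.cloud</groupId>
      <artifactId>s3-storage-wagon</artifactId>
      <version>2.3</version> <!-- placeholder version -->
    </extension>
  </extensions>
</build>
<distributionManagement>
  <repository>
    <id>my-s3-release-repo</id>
    <url>s3://my-artifact-bucket/release</url>
  </repository>
</distributionManagement>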
Use AWS CodeDeploy (https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) to deploy it to the EC2 instance. The trigger for CodeDeploy would be the S3 bucket that you deploy your jars to. You will need to turn on S3 versioning to make it work. CodeDeploy has its own set of hooks that you can use to run any shell command or script on the EC2 instance.
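As a rough sketch of what those hooks look like, a minimal appspec.yml with a single hook; the destination folder and script name are made up for illustration:
version: 0.0
os: linux
files:
  - source: /
    destination: /opt/myapp          # hypothetical target folder on the EC2 instance
hooks:
  AfterInstall:
    - location: scripts/run_jars.sh  # hypothetical script that starts the jars
      timeout: 300
      runas: root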

AWS ECS upload file to bucket from within container via bash

I have an ECS task running on AWS Fargate. I generate some files in the container and need to upload these files to an S3 bucket.
Can I do this by installing the AWS CLI in the container?
I'm not sure about the following:
Do I need to use some REST API (like the Python boto3 library) or can I use the AWS console?
How should I authenticate the requests (IAM and AWS Secrets Manager?)
Do I need to use some REST API (like the Python boto3 library) or can I use the AWS console?
Are you asking how to install the AWS CLI into the Docker container running in ECS? You would need to update your Docker image to include the AWS CLI and then redeploy the container to ECS. The AWS API, Boto3, or the AWS console are not going to help with that task.
How should I authenticate the requests (IAM and AWS Secrets Manager?)
By assigning an IAM role to the ECS task.
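As a sketch of both points, a Dockerfile layer that bakes the AWS CLI v2 into the image, plus the upload command the container would run; the bucket name and file path are placeholders, and the credentials come from the task role, not from anything baked into the image:
# Dockerfile fragment: install the AWS CLI v2 (assumes curl and unzip exist in the base image)
RUN curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o /tmp/awscliv2.zip \
 && unzip -q /tmp/awscliv2.zip -d /tmp \
 && /tmp/aws/install \
 && rm -rf /tmp/aws /tmp/awscliv2.zip

# Inside the running task the CLI picks up the task-role credentials automatically:
aws s3 cp /tmp/generated-file.txt s3://my-output-bucket/generated-file.txt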

Can I use the Docker image registry from Google Cloud Build?

With Google Cloud Build, I am creating a trigger to build using a Dockerfile, the end result of which is a Docker image.
I'd like to tag and push this to the standard Docker image repository (docker.io), but I get the following error:
The push refers to repository [docker.io/xxx/yyy]
Pushing xxx/yyy:master
denied: requested access to the resource is denied
I assume that this is because within the context of the build workspace, there has been no login to the Docker registry.
Is there a way to do this, or do I have to use the Google Image Repository?
You can configure Google Cloud Build to push to a different repository with a cloudbuild.yaml in addition to the Dockerfile. You can log in to Docker by passing your password as an encrypted secret env variable. An example of using a secret env variable can be found here: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials#example_build_request_using_an_encrypted_variable
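A rough cloudbuild.yaml sketch of that idea, here using Secret Manager rather than the KMS-encrypted variables shown in the linked page; the project name, secret name, and image names are placeholders:
steps:
  # Build the image from the Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'docker.io/xxx/yyy:master', '.']
  # Log in to Docker Hub and push ($$ escapes the variable for Cloud Build)
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: ['-c', 'echo "$$DOCKERHUB_PASSWORD" | docker login --username xxx --password-stdin && docker push docker.io/xxx/yyy:master']
    secretEnv: ['DOCKERHUB_PASSWORD']
availableSecrets:
  secretManager:
    - versionName: projects/my-project/secrets/dockerhub-password/versions/latest
      env: 'DOCKERHUB_PASSWORD'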

Dockerrun.aws.json structure for ECR Repo

We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use this image. I attempted to modify the name as <my_ECR_URL>/<repo_name>:<image_tag> but this is not successful. I also saw the details of private registries using an authentication file on S3 but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent is only able to pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully it helps someone else.
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk and ECR you don't need to authenticate manually. Just make sure the role has the AmazonEC2ContainerRegistryReadOnly policy attached:
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
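To tie this back to the original question, a multi-container (version 2) Dockerrun.aws.json references the ECR image by its full registry path inside each container definition, roughly like the sketch below; the account ID, region, repository, tag, container name, memory, and ports are placeholders, and the environment's instance profile still needs the read-only policy described above:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ]
    }
  ]
}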