I have an AWS CodeCommit repository where developers check in their code. Since this is for a PoC, I don't want to create a CI/CD pipeline; instead I would like to copy the code from CodeCommit to my AWS EC2 instance and then run it there to view the results. Does anyone know how to copy the code from CodeCommit to an EC2 instance? I know how to use scp to copy code from my laptop to EC2, but since we collaborate in CodeCommit, I think it would be nicer to pull the latest code from the repository and run it on the instance. Any help appreciated. Thanks
Thank you and Regards,
Santosh
Install git
Configure with AWS credentials
Do a git clone on the CodeCommit repository
This will provide a local copy of the code checked into the repository.
See: Setting Up for AWS CodeCommit - AWS CodeCommit
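A minimal shell sketch of those steps on an Amazon Linux instance (repository name and region are placeholders; this assumes the instance has an IAM role or configured AWS CLI credentials that allow CodeCommit access):

# Install git (Amazon Linux / Amazon Linux 2)
sudo yum install -y git

# Let git obtain CodeCommit credentials from the AWS CLI credential helper
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true

# Clone the repository (region and repo name are placeholders)
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo

# Later, pull the latest code before running it
cd my-repo && git pull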
I have an existing project repository in GitLab. Since GitLab runs on our own server, the LFS objects live in a directory there. My doubt is that AWS CodeCommit does not have a separate server where I can store LFS configuration the way GitLab or Bitbucket do, yet I would have to configure the LFS directory for AWS CodeCommit. My question is: "Does AWS CodeCommit support Git LFS?" If yes, can someone explain how to configure AWS CodeCommit with Git LFS?
I have a requirement to do CI/CD using Bitbucket Pipelines.
We use Maven to build our code on Bitbucket pipelines and push the artifacts (jars) to AWS S3. The missing link is to figure out a way to get the artifacts from S3 and deploy to our EC2 instance.
It should all work from the Bitbucket Pipelines YAML, hopefully using Maven plugins.
For pushing the artifacts to S3 we use:
<groupId>com.gkatzioura.maven.cloud</groupId>
<artifactId>s3-storage-wagon</artifactId>
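A rough sketch of how that wagon is typically wired into the pom.xml (the bucket name, repository ids, and version below are placeholders, not taken from the original post):

<build>
  <extensions>
    <extension>
      <groupId>com.gkatzioura.maven.cloud</groupId>
      <artifactId>s3-storage-wagon</artifactId>
      <version>WAGON_VERSION_USED_IN_YOUR_BUILD</version> <!-- placeholder -->
    </extension>
  </extensions>
</build>

<distributionManagement>
  <!-- s3:// URLs are placeholders; mvn deploy uploads the jars here -->
  <snapshotRepository>
    <id>my-artifact-bucket-snapshot</id>
    <url>s3://my-artifact-bucket/snapshot</url>
  </snapshotRepository>
  <repository>
    <id>my-artifact-bucket-release</id>
    <url>s3://my-artifact-bucket/release</url>
  </repository>
</distributionManagement>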
Is there a way/plugin that will download the artifact from the S3 bucket, deploy it to a specific folder on the EC2 instance, and perhaps call an sh script to run the jars?
Thank you!
Use AWS CodeDeploy (https://docs.aws.amazon.com/codedeploy/latest/userguide/welcome.html) to deploy it to the EC2 instance. The trigger for CodeDeploy would be the S3 bucket that you deploy your jars to. You will need to turn on S3 versioning to make it work. CodeDeploy has its own set of hooks that you can use to perform any shell command or run any bat files on the EC2 instance.
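As a rough illustration, a CodeDeploy revision bundles the jar together with an appspec.yml along these lines (file names, paths, and the script are placeholders, not from the answer above):

version: 0.0
os: linux
files:
  - source: app.jar            # jar produced by the Maven build (placeholder name)
    destination: /opt/myapp    # target folder on the EC2 instance (placeholder)
hooks:
  ApplicationStart:
    - location: scripts/run_app.sh   # shell script that starts the jar (placeholder)
      timeout: 300
      runas: ec2-user

The revision (jar, appspec.yml, and scripts) is zipped and uploaded to the versioned S3 bucket, which is what triggers the deployment.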
We're having issues connecting our Bitbucket to CodePipeline.
I can set up the connection, but I don't get any repos listed.
Even if I type in the name, it cannot find it.
After typing the name manually I can save and run, but it fails with the error: make sure repo exists.
If I set up the connection in CodeBuild, it works.
I tested in eu-central-1 and eu-west-1
Anybody with a similar issue?
Best regards,
Kai
I have an EC2 instance, up and running fine. Now I want a copy of the source code to be moved to AWS CodeCommit for further development and deployment.
Basically, the source code should be moved from AWS EC2 to AWS CodeCommit.
You need to either SSH (if Linux) or RDP (if Windows) into the machine. Turn your source directories into a git repository, and then git push it to your remote (CodeCommit) repository.
You basically do it the exact same way you would do it from any other machine - the fact that it is an EC2 instance really doesn't matter in this case.
First of all, you have to create Git credentials in AWS IAM. You need these credentials whenever you push your code to CodeCommit. Alternatively, you can upload your SSH public key to AWS CodeCommit so that you don't need to enter your credentials every time you push your code.
Then follow the steps below (a consolidated shell sketch follows the list):
Type git init while you are in your project folder on the EC2 instance
Then git add .
Then type git commit -m 'your custom commit message here'
Create a repo in CodeCommit from AWS Management Console.
Then, from your project folder, add the remote by typing git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/testrepo (if you are using the HTTPS protocol) or git remote add origin ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/testrepo (if you are using the SSH protocol)
Then push your code by typing git push origin master. At this stage you have to enter your credentials (the IAM Git username and password), or you don't need them if you have already uploaded your SSH public key to AWS CodeCommit.
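Put together, the steps above look roughly like this (the repository URL is the example one from the steps; your region and repo name will differ):

cd /path/to/your/project          # project folder on the EC2 instance (placeholder path)
git init
git add .
git commit -m 'your custom commit message here'
git remote add origin https://git-codecommit.us-east-1.amazonaws.com/v1/repos/testrepo
git push origin master            # prompts for the IAM Git credentials unless an SSH key is set up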
We are switching from Docker Hub to ECR and I'm curious how to structure the Dockerrun.aws.json file to use such an image. I attempted to set the image name to <my_ECR_URL>/<repo_name>:<image_tag>, but this was not successful. I also saw the details of private registries using an authentication file on S3, but this doesn't seem like the correct route when aws ecr get-login is the recommended way to authenticate with ECR.
Can anyone point me to how I can use an ECR image in a Beanstalk Dockerrun.aws.json file?
If I look at the ECS Task Definition, there's a required attribute called com.amazonaws.ecs.capability.ecr-auth, but I'm not setting that anywhere in my Dockerrun.aws.json file and I'm not sure what needs to be there. Perhaps it is an S3 bucket? Something is needed, as every time I try to run the Elastic Beanstalk-created tasks from ECS, I get:
Run tasks failed
Reasons : ATTRIBUTE
Any insights are greatly appreciated.
Update: I see from some other threads that this used to occur with earlier versions of the ECS agent, but I am currently running agent version 1.6.0 and Docker version 1.7.1, which I believe are the recommended versions. Is this possibly an issue with the Docker version?
So it turns out the ECS agent is only able to pull ECR images from version 1.7.0 onward, and that's where mine was falling short. Updating the agent resolved my issue, and hopefully it helps someone else.
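For anyone checking their own setup, the running agent version can be read from the ECS agent's local introspection endpoint on the container instance (assuming the default port 51678):

# Returns JSON that includes the agent "Version" field
curl -s http://localhost:51678/v1/metadata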
This is most likely an issue with IAM roles if you are using a role that was previously created for Elastic Beanstalk. Ensure that the role that Elastic Beanstalk is running with has the AmazonEC2ContainerRegistryReadOnly managed policy attached.
Source: http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_IAM_policies.html
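If the policy is missing, one way to attach it is via the AWS CLI (the role name below is the default Elastic Beanstalk instance profile role; yours may differ):

aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly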
Support for ECR was added in version 1.7.0 of the ECS Agent.
When using Elastic Beanstalk and ECR you don't need to authenticate manually. Just make sure the environment's instance profile has the AmazonEC2ContainerRegistryReadOnly policy attached.
You can store your custom Docker images in AWS with Amazon EC2 Container Registry (Amazon ECR). When you store your Docker images in Amazon ECR, Elastic Beanstalk automatically authenticates to the Amazon ECR registry with your environment's instance profile, so you don't need to generate an authentication file and upload it to Amazon Simple Storage Service (Amazon S3).
You do, however, need to provide your instances with permission to access the images in your Amazon ECR repository by adding permissions to your environment's instance profile. You can attach the AmazonEC2ContainerRegistryReadOnly managed policy to the instance profile to provide read-only access to all Amazon ECR repositories in your account, or grant access to a single repository by using the following template to create a custom policy:
Source: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html
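To answer the original question directly, a single-container Dockerrun.aws.json referencing an ECR image looks roughly like this (account id, region, repository name, tag, and port are placeholders):

{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-repo:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}

With the read-only ECR policy on the instance profile, no separate authentication file is needed.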