This might come off as a rather far-fetched query, but please help me out.
Is it possible to register an Azure Virtual Machine with AWS CodeDeploy?
I've done some reading and found that Amazon provides an option to install the CodeDeploy agent on on-premises instances, as seen HERE.
If it is at all possible, how do we go about it?
My objective is to use CodeDeploy, since we are already subscribed to it, instead of Azure's DevOps services.
As suggested by @silent, I tried it out and it actually works. Here are the steps:
But before we start, mind you: this registers the VM as an on-premises instance, which is authenticated with an IAM user (hence the --iam-user-arn flag in Step 3), not an instance role.
Step 1: Install AWS CLI
Step 2: Run aws configure with the credentials of the IAM user you set up for CodeDeploy (the one whose ARN you'll pass in Step 3)
Step 3: Run the following command
aws deploy register \
    --instance-name <some name for your VM> \
    --iam-user-arn <IAM user arn> \
    --tags Key=<some key for your tag>,Value=<some value> \
    --region <your region>
Step 4: Run aws deploy install --config-file codedeploy.onpremises.yml on the VM itself
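For illustration, here is what the whole flow might look like end to end, run from the Azure VM. The instance name, tag, account ID, and region below are placeholder values, not anything prescribed by CodeDeploy:

# Steps 1-2: configure the CLI with the CodeDeploy IAM user's keys
aws configure

# Step 3: register the VM as an on-premises instance (placeholder values)
aws deploy register \
    --instance-name azure-web-01 \
    --iam-user-arn arn:aws:iam::111122223333:user/CodeDeployOnPrem \
    --tags Key=Environment,Value=staging \
    --region us-east-1

# Step 4: 'register' writes codedeploy.onpremises.yml; 'install' must run
# on the VM itself with admin rights
sudo aws deploy install --config-file codedeploy.onpremises.yml --region us-east-1

After this, the VM should show up under CodeDeploy's on-premises instances, and you can target it in a deployment group via the tag.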
Related
I am trying to implement CI/CD using AWS CodeBuild and deploy an application onto an AWS EC2 instance, but the code deployment is failing with the error below:
The IAM role arn:aws:iam::341502448925:role/CodeDeployServiceRole does not give you permission to perform operations in the following AWS service: AmazonEC2
I have even created a service role in the IAM console, but it's not working for me. Could someone let me know how I can resolve this issue?
Besides creating an IAM role, you should also install the AWS CodeDeploy agent on your EC2 instance:
install aws-codedeploy agent
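For reference, on an Amazon Linux or RHEL instance the agent install usually looks something like the following; the S3 bucket name embeds the region, so substitute yours:

# Install the CodeDeploy agent on Amazon Linux / RHEL (adjust region in the URL)
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
# Verify the agent is running
sudo service codedeploy-agent status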
Background:
I have Jenkins installed in AWS Account #1 (account1234), and the instance has the IAM role Role-Jenkins attached to it. GitHub is configured with Jenkins.
When I click Build Job in Jenkins, Jenkins pulls all the files from GitHub, and they can be found in
/var/lib/jenkins/workspace/.
There's an application running in AWS Account #2 (account5678) in an EC2 instance (i-xyz123), and the project files are in /home/app/all_files/. This EC2 instance has the role app-role attached to it.
What I'm trying to achieve:
When I click Build, I want Jenkins to push files from account1234 to account5678 by opening an SSM session from Jenkins to the EC2 instance the app is running on.
What I tried:
In Jenkins, as part of the build shell script, I added:
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 --document-name AWS-RunShellScript --comment "IP config" --parameters commands=ifconfig --output text
to test it. (If successful, I want to pass cp -r /var/lib/jenkins/workspace/ /home/app/all_files/ as the command.)
Error:
An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::account1234:assumed-role/Role-Jenkins/i-01234abcd is not authorized to perform: ssm:SendCommand on resource: arn:aws:ec2:us-east-1:account1234:instance/i-xyz123
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Issue 1: instance/i-xyz123 is in account5678, but the error above shows SSM trying to connect to an instance in account1234 (which shouldn't be happening).
Q1: How do I update my command so that it opens an SSM session with instance/i-xyz123 in account5678 to accomplish what I'm trying to do?
I believe I would also need to set up a trust relationship between the two roles.
(Note: I want to do it via Systems Manager, as I won't have to deal with credentials of any sort.)
If I've understood correctly then you're right; to interact with the resources in account5678, there needs to be a trust relationship so that the Jenkins account can assume the relevant role in account5678 and call SSM from there.
Once you've configured the role relationship (ref: IAM cross-account roles), you should be able to achieve what you need by assuming the role first in your shell script and then running the SSM command. That way Jenkins will use the temporary credentials and execute the command in the correct account (5678).
This site steps through it pretty well:
Tom Gregory - Jenkins Assume Role
If you just Cmd/Ctrl+F on that page ^ and search for 'shell', you should get to the section you need. Hope this somewhat helps.
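A minimal sketch of that shell step, assuming account5678 has a role (here called CrossAccountSSM, a made-up name) that trusts Role-Jenkins and allows ssm:SendCommand:

# Assume the cross-account role (role name is hypothetical)
CREDS=$(aws sts assume-role \
    --role-arn arn:aws:iam::account5678:role/CrossAccountSSM \
    --role-session-name jenkins-build \
    --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
    --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)

# This call now resolves i-xyz123 inside account5678
aws ssm send-command --region us-east-1 --instance-ids i-xyz123 \
    --document-name AWS-RunShellScript --comment "IP config" \
    --parameters commands=ifconfig --output text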
Say I use the AWS CLI locally on my machine; I'd need to authenticate with credentials prior to any operation.
How do AWS services give permission to other services on my behalf? More specifically, how does a container run the AWS CLI on my behalf without prior authentication?
I am asking this after running my first pipeline successfully in CodePipeline. My buildspec.yml runs the aws s3 sync command flawlessly, which then made me wonder how AWS permissions work internally.
AWS CodeBuild uses an IAM Service Role to provide AWS permissions to the CodeBuild environment. You should have had to create a service role for your CodeBuild configuration.
When the AWS CLI runs and it hasn't been previously configured with API access keys, it checks whether it is running in an AWS environment such as EC2 or Lambda, and if so, it uses the IAM role assigned to that runtime environment.
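You can see this resolution in action: run the command below from your buildspec.yml (or from any EC2 instance with a role attached) and it prints the identity the CLI resolved to, with no keys configured. The output values shown are illustrative:

aws sts get-caller-identity
# Illustrative output - note the assumed-role ARN of the CodeBuild service role:
# {
#     "UserId": "AROAEXAMPLEID:AWSCodeBuild-1234",
#     "Account": "111122223333",
#     "Arn": "arn:aws:sts::111122223333:assumed-role/my-codebuild-service-role/AWSCodeBuild-1234"
# }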
I am using the following command to create an AWS Aurora Serverless instance:
aws rds create-db-cluster --db-cluster-identifier test-cluster --database-name testdb --master-username test --master-user-password testtest --engine aurora --engine-mode serverless --region us-east-1
but I am getting the following error.
Unknown options: --engine-mode, serverless
The above command works great on my AWS account, but it's not working on my client's account (I only have programmatic access to that account). I have double-checked the permissions, and I have permissions similar to those on my own account.
Summary: the AWS command to create a Serverless Aurora cluster works on one account but not on another account with similar permissions.
The error message states that it does not know about the engine-mode argument. This is a clear indication that your AWS CLI version is outdated. Serverless support was added in a late-2018 release, so you need to update your client's AWS CLI to recognize these inputs.
I have figured it out. I was using awscli version 1.14 on my server and 1.16 on my laptop. I updated awscli and now it's working fine.
sudo pip install --upgrade awscli
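To confirm the upgrade took effect, a quick check along these lines should do:

aws --version                     # should now report 1.16.x or later
aws rds create-db-cluster help | grep -- --engine-mode   # option should appear in the help text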
For a month or so I've been studying AWS services, and now I have to accomplish some basic tasks on AWS Elastic Beanstalk via the command line. As far as I understand, there are the aws elasticbeanstalk [command] and the eb [command] CLIs installed on the build instance.
When I run eb status inside the application folder, I get a response of the form:
Environment details for: app-name
Application name: app-name
Region: us-east-1
Deployed Version: app-version
Environment ID: env-name
Platform: 64bit Amazon Linux ........
Tier: WebServer-Standard
CNAME: app-name.elasticbeanstalk.com
Updated: 2016-07-14 .......
Status: Ready
Health: Green
That tells me eb init has been run for the application.
On the other hand if I run:
aws elasticbeanstalk describe-application-versions --application-name app-name --region us-east-1
I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In the home folder of the current user there is a .aws directory with a credentials file containing a [profile] line plus aws_access_key_id and aws_secret_access_key lines, all set up.
Besides the obvious problem with the credentials, what I really lack is an understanding of the two CLIs. Why is the EB CLI not asking for credentials while the AWS CLI is? When do I use one or the other? Can I use only the AWS CLI? Any clarification on the matter will be highly appreciated.
EDIT:
For anyone ending up here with the same "Unable to locate credentials" problem: adding the --profile profile-name option solved it for me. profile-name can be found in the ~/.aws/config (or credentials) file on the [profile profile-name] line.
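Concretely, with a hypothetical profile name my-profile:

# Per command:
aws elasticbeanstalk describe-application-versions \
    --application-name app-name --region us-east-1 --profile my-profile
# Or once for the whole shell session:
export AWS_PROFILE=my-profile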
To verify that the AWS CLI is configured on your system, run aws configure and provide all the details it requires. That should fix your credentials problem, and reviewing the change in configuration will help you understand what was wrong with your current setup.
The eb CLI and the aws CLI have very similar capabilities, and I too am a bit confused as to why they both exist. From my experience, the main difference is that the aws CLI interacts with your AWS account using simple requests, while the eb CLI maintains connections between you and the EB environments and so allows finer control over them.
For instance, I've just developed a CI/CD pipeline for our Beanstalk apps. When I use the eb CLI I can monitor the deployment of our apps and notify the developers when it's finished. The aws CLI does not offer that functionality; the only way to achieve it is to repeatedly query the service until you receive the desired result.
The AWS CLI is a general tool that works on all AWS resources. It's not tied to a specific software project, the type of machine you're on, the directory you're in, or anything like that. It only needs credentials, whether they've been put there manually if it's your own machine, or generated by AWS if it's an EC2 instance.
The EB CLI is a high level tool to wrangle your software project into place. It's tied to the directory you're in, it assumes that the stuff in your directory is your project, and it has short commands that do a lot of background work to magically put everything in the right place.
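To make the contrast concrete, here is roughly the same deployment expressed both ways. The bucket, version label, and application/environment names are placeholders:

# EB CLI: reads .elasticbeanstalk/config.yml in the current directory,
# zips the project, uploads it, and streams status while it waits:
eb deploy

# AWS CLI: every step is explicit and nothing is inferred from the directory:
aws s3 cp app.zip s3://my-bucket/app.zip
aws elasticbeanstalk create-application-version --application-name app-name \
    --version-label v42 --source-bundle S3Bucket=my-bucket,S3Key=app.zip
aws elasticbeanstalk update-environment --environment-name env-name \
    --version-label v42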