InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Missing credentials - amazon-web-services

I'm trying to deploy a GitHub project to an EC2 instance using AWS CodeDeploy. After following two video tutorials and a bunch of Google answers, I'm still getting the following error:
2017-02-01 12:20:08 INFO [codedeploy-agent(1379)]: master 1379: Spawned child 1/1
2017-02-01 12:20:09 INFO [codedeploy-agent(1383)]: On Premises config file does not exist or not readable
2017-02-01 12:20:09 INFO [codedeploy-agent(1383)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandExecutor: Archives to retain is: 5}
2017-02-01 12:20:09 INFO [codedeploy-agent(1383)]: Version file found in /opt/codedeploy-agent/.version.
2017-02-01 12:20:09 ERROR [codedeploy-agent(1383)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Missing credentials - please check if this instance was started with an IAM instance profile
I have two IAM roles:
CodeDeployInstanceRole
CodeDeployServiceRole
CodeDeployInstanceRole for the EC2 Instance
Policy Name: AmazonEC2RoleforAWSCodeDeploy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:ListObjects"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Policy Name: AutoScalingNotificationAccessRole
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": "*",
      "Action": [
        "sqs:SendMessage",
        "sqs:GetQueueUrl",
        "sns:Publish"
      ]
    }
  ]
}
Trust Relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "codedeploy.amazonaws.com",
          "ec2.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
CodeDeployServiceRole for CodeDeploy
Policy Name: AWSCodeDeployRole
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:CompleteLifecycleAction",
        "autoscaling:DeleteLifecycleHook",
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLifecycleHooks",
        "autoscaling:PutLifecycleHook",
        "autoscaling:RecordLifecycleActionHeartbeat",
        "autoscaling:CreateAutoScalingGroup",
        "autoscaling:UpdateAutoScalingGroup",
        "autoscaling:EnableMetricsCollection",
        "autoscaling:DescribePolicies",
        "autoscaling:DescribeScheduledActions",
        "autoscaling:DescribeNotificationConfigurations",
        "autoscaling:SuspendProcesses",
        "autoscaling:ResumeProcesses",
        "autoscaling:AttachLoadBalancers",
        "autoscaling:PutScalingPolicy",
        "autoscaling:PutScheduledUpdateGroupAction",
        "autoscaling:PutNotificationConfiguration",
        "autoscaling:DescribeScalingActivities",
        "autoscaling:DeleteAutoScalingGroup",
        "ec2:DescribeInstances",
        "ec2:DescribeInstanceStatus",
        "ec2:TerminateInstances",
        "tag:GetTags",
        "tag:GetResources",
        "sns:Publish",
        "cloudwatch:DescribeAlarms",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeInstanceHealth",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer"
      ],
      "Resource": "*"
    }
  ]
}
Trust Relationship
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "codedeploy.amazonaws.com",
          "ec2.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EC2 Instance
I spin up my own image based on Debian, so I have NodeJS already installed. When I spin up a new instance I also paste the following script into the User data text area to make sure CodeDeploy is installed.
#!/bin/bash -x
REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//') &&
sudo apt-get update -y &&
sudo apt-get install -y python-pip &&
sudo apt-get install -y ruby &&
sudo apt-get install -y wget &&
cd /home/admin &&
wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install &&
chmod +x ./install &&
sudo ./install auto &&
sudo apt-get remove -y wget &&
sudo service codedeploy-agent start
Debugging
If I log in to the EC2 instance that I have created and execute the following command:
echo $(curl http://169.254.169.254/latest/meta-data/iam/security-credentials/)
I get the following response: CodeDeployInstanceRole
When I then execute
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/CodeDeployInstanceRole
I get the following response
{
  "Code" : "Success",
  "LastUpdated" : "2017-02-01T12:38:07Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "THE_KEY",
  "SecretAccessKey" : "SECRET",
  "Token" : "TOKEN",
  "Expiration" : "2017-02-01T19:08:43Z"
}
On GitHub I can see that CodeDeploy never accesses my repo, even when I select deployment from GitHub and set the correct repo name and commit ID.
Question
What am I missing?

I ran into the same issue. Briefly, this is what caused the problem:
Launch an instance WITHOUT any roles attached to it
Then install the codedeploy-agent on that machine
Only afterwards attach an IAM role to the machine
Result: I get the error: Missing credentials - please check if this instance was started with an IAM instance profile
Solution: restart the codedeploy agent. Use:
sudo service codedeploy-agent restart
The error should be gone now!
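To double-check that the restarted agent picked up the credentials, watch the agent log (the standard path, also referenced further down this thread) and confirm the "Missing credentials" line stops appearing:
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log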

I was getting the same "please check if this instance was started with an IAM instance profile" error. To check whether your instance was launched without an IAM profile, go to the AWS console -> your instance -> check the "IAM role" value in the Description tab. If it's empty, you launched the instance without an IAM role, and here is how to solve the issue:
Go to IAM console -> Roles -> Create new role
Select AWS Service -> EC2 -> Next: Permissions (don't change anything) -> Next: Tags -> Next: Review -> give it a name and click Create role
Go to AWS EC2 console -> select instance -> Actions -> Instance settings -> Attach/replace IAM role -> select the IAM role you just created
Restart the CodeDeploy agent: sudo service codedeploy-agent restart
Try to deploy again and it should work (a CLI equivalent is sketched below)
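If you prefer the CLI, a rough equivalent of the console steps above (the role/profile names and instance ID are placeholders, and ec2-trust.json is assumed to hold an EC2 trust policy like the one quoted in the question):
aws iam create-role --role-name CodeDeployInstanceRole --assume-role-policy-document file://ec2-trust.json
aws iam create-instance-profile --instance-profile-name CodeDeployInstanceRole
aws iam add-role-to-instance-profile --instance-profile-name CodeDeployInstanceRole --role-name CodeDeployInstanceRole
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CodeDeployInstanceRole
sudo service codedeploy-agent restart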

It turns out that, by default, Debian doesn't have curl installed. Installing curl before making the curl request that fetches the region the server is running in was the missing piece of the Bash script.
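For reference, here is a sketch of the corrected user-data script from the question, with curl installed before it is first used (everything else unchanged):
#!/bin/bash -x
# curl is not installed by default on Debian, so install it before querying the metadata service
sudo apt-get update -y &&
sudo apt-get install -y curl &&
REGION=$(curl 169.254.169.254/latest/meta-data/placement/availability-zone/ | sed 's/[a-z]$//') &&
sudo apt-get install -y python-pip &&
sudo apt-get install -y ruby &&
sudo apt-get install -y wget &&
cd /home/admin &&
wget https://aws-codedeploy-$REGION.s3.amazonaws.com/latest/install &&
chmod +x ./install &&
sudo ./install auto &&
sudo apt-get remove -y wget &&
sudo service codedeploy-agent start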

The instance role permissions look good to me. But the IAM instance profile is only applied when the instance is first launched. Could you make sure the instance's role had the right permissions before launching the instances?

This is what worked for me in 2021 on Ubuntu 16.04
Upgrade from Python 3.5.2 to 3.6
https://www.rosehosting.com/blog/how-to-install-python-3-6-on-ubuntu-16-04/
Run the following with sudo:
cd /opt
wget https://www.python.org/ftp/python/3.6.3/Python-3.6.3.tgz
tar -xvf Python-3.6.3.tgz
cd Python-3.6.3
apt-get install zlib1g-dev
./configure
make
make install
Install latest version of aws cli v1
https://docs.aws.amazon.com/cli/latest/userguide/install-linux.html
cd ~
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
Modify Instance Metadata (setting --http-tokens to optional allows IMDSv1 requests, which this older tooling relies on)
https://docs.aws.amazon.com/cli/latest/reference/ec2/modify-instance-metadata-options.html
aws ec2 modify-instance-metadata-options \
--instance-id ${FOO_ID} \
--http-tokens optional \
--http-endpoint enabled
Install the CodeDeploy agent for Ubuntu Server
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html
sudo apt-get update
sudo apt-get install ruby
sudo apt-get install wget
cd /home/ubuntu
wget https://aws-codedeploy-us-west-2.s3.us-west-2.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent restart
sudo service codedeploy-agent status
To view deployment log files on Amazon Linux, RHEL, and Ubuntu Server instances
https://docs.aws.amazon.com/codedeploy/latest/userguide/deployments-view-logs.html
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
tail -f /opt/codedeploy-agent/deployment-root/deployment-logs/codedeploy-agent-deployments.log

Detach the instance profile from the EC2 instance and then attach it back (Actions -> Security). Finally, restart the agent with
sudo service codedeploy-agent restart
My case is slightly different from the other answers. My profile looked correct and had the correct policy, and the EC2 instance was attached to the role - at least that is what I saw in the AWS console.
The root cause was that the EC2 instance did not actually have a working profile association, due to a regeneration of a profile/role with the same name. This can be confirmed with curl http://169.254.169.254/latest/meta-data/iam/info
A 404 means something is wrong.
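If the association really is stale, a CLI sketch of the detach/re-attach dance (the instance ID and profile name are examples):
ASSOC_ID=$(aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0123456789abcdef0 --query 'IamInstanceProfileAssociations[0].AssociationId' --output text)
aws ec2 disassociate-iam-instance-profile --association-id $ASSOC_ID
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=CodeDeployInstanceRole
sudo service codedeploy-agent restart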

Related

Amplify build fails with "Request failed 401 Unauthorized"

I am building an amplify react app and trying to connect it to my private npm packages in my CodeArtifact repository.
In the build file amplify.yml, I added
preBuild:
  commands:
    - aws codeartifact login --tool npm --repository myrepo --domain mydomain --namespace mynamespace --domain-owner myid
    - yarn install
and gave the amplify service role the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codeartifact:GetAuthorizationToken",
        "codeartifact:GetRepositoryEndpoint",
        "codeartifact:ReadFromRepository"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": "sts:GetServiceBearerToken",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "sts:AWSServiceName": "codeartifact.amazonaws.com"
        }
      }
    }
  ]
}
This setup works for CodeBuild building Lambda functions, but in Amplify, I get
Successfully configured npm to use AWS CodeArtifact repository
after the login command and
error An unexpected error occurred: "<some-package-url>: Request failed \"401 Unauthorized\"".
when installing dependencies.
I inspected the environment during the Amplify build and did not find any AWS access key ID or secret, but I don't know why.
OK, I resolved my issue by deleting yarn.lock and adding it to .gitignore.
The problem was that yarn caches the resolved package addresses in yarn.lock. Those addresses pointed into my CodeArtifact repository, because I was logged in while installing dependencies on my dev machine. Since yarn.lock is not in .gitignore by default, I had pushed it into the build. When yarn installs dependencies during the build, it uses the cached addresses, which can no longer be reached.
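In case it helps, the concrete steps (a sketch, run from the repo root):
# stop tracking the lockfile that pins CodeArtifact URLs, and ignore it going forward
git rm yarn.lock
echo "yarn.lock" >> .gitignore
git add .gitignore
git commit -m "Stop tracking yarn.lock (pinned to CodeArtifact)"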

EC2 instance policy (profile) not allowing pull image from ecr

I have attached the following policy to my EC2 instance
{
  "Sid": "ECRPull",
  "Effect": "Allow",
  "Action": [
    "ecr:Describe*",
    "ecr:Get*",
    "ecr:List*",
    "ecr:BatchGetImage",
    "ecr:BatchCheckLayerAvailability"
  ],
  "Resource": [
    "arn:aws:ecr:eu-west-1:123456789:repository/my-repo"
  ]
}
However, the following docker pull from within the instance fails. Why?
docker pull 123456789.dkr.ecr.eu-west-1.amazonaws.com/my-repo:sometag
no basic auth credentials
The docker command doesn't call the AWS IAM service, since Docker is not part of AWS. You have to run an aws command before issuing docker commands.
aws ecr get-login gives you a temporary token.
Use the token with the username AWS to do the docker login:
docker login -u AWS -p <token from previous command> -e none https://<aws_account_id>.dkr.ecr.us-east-1.amazonaws.com
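Note that newer AWS CLI versions replace get-login with get-login-password; for the repo in this question the login would be something like:
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789.dkr.ecr.eu-west-1.amazonaws.com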

Running docker in AWS ECS and passing env file

I need to run a docker container in AWS ECS. I do NOT have access to the source code for the image. This is a private image, from a private repo that I have uploaded to AWS ECR. I have created an AWS ECS Task Definition to run the container inside a service, inside a cluster. The image shows as being up and running but I cannot hit it via my browser. I know that all the network settings are correct because I can hit a simple hello world app that I also deployed to test.
There is also a command I need to run before: docker run --env-file <environment_variables_file> <image>:<tag> rake db:reset && rake db:seed.
According to the instructions for this docker image, the run command for it is: docker run -d --name <my_image_name> --env-file <environment_variables_file> -p 8080:80 <image>:<tag>.
I can run this image locally on my laptop with no issues; deploying it to AWS is the problem.
My question is how do I provide the environment_variables_file to the image? Where do I upload the file and how do I pass it? How do I run the command to init the DB before the image runs?
Since Nov 2020, ECS does support env files (blog post), but they must be hosted on S3:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html
Pasting the essentials for reference. Under container definition:
"environmentFiles": [
{
"value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
"type": "s3"
}
]
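For context, a minimal (hypothetical) container definition embedding that fragment might look like:
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/my-repo:sometag",
      "memory": 512,
      "environmentFiles": [
        {
          "value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
          "type": "s3"
        }
      ]
    }
  ]
}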
The task execution role also needs the following permission:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::s3_bucket_name"
      ]
    }
  ]
}
Amazon ECS doesn't support environment variable files. You can set environment variables inside task definition. For example:
"environment" : [
{ "name" : "string", "value" : "string" },
{ "name" : "string", "value" : "string" }
]
Please read following instructions for more details.
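For reference, a sketch of registering a task definition with inline variables via the CLI (all names are hypothetical):
aws ecs register-task-definition \
  --family my-task \
  --container-definitions '[{
    "name": "app",
    "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/my-repo:sometag",
    "memory": 512,
    "environment": [
      {"name": "RAILS_ENV", "value": "production"}
    ]
  }]'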
Update:
AWS now provides a way -
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html

ERROR: 2.0+ Platforms require a service role. You can provide one with --service-role option

We recently upgraded the EB CLI tool to version 3.6.2 (Python 2.7.6).
Now when we spin up a new eb environment...
eb create dev-env -p "64bit Amazon Linux 2015.09 v2.0.4 running Ruby 2.2 (Puma)" --single -i t2.micro --envvars SECRET_KEY_BASE=g5dh9cg61...
...we get this new error:
EB ERROR: 2.0+ Platforms require a service role. You can provide one with --service-role option
The EB CLI now requires you to specify a service role.
If you don't already have one, create an 'aws-elasticbeanstalk-service-role' role here: https://console.aws.amazon.com/iam/home#roles
Select the 'Amazon EC2' AWS service role type;
assign one or more permissions;
update the Trust Relationships, pasting in (for example):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "elasticbeanstalk.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "elasticbeanstalk"
        }
      }
    }
  ]
}
Now when you spin up the new EB environment, include the --service-role option:
eb create dev-env -p "64bit Amazon Linux 2015.09 v2.0.4 running Ruby 2.2 (Puma)" --single -i t2.micro \
  --service-role aws-elasticbeanstalk-service-role --envvars SECRET_KEY_BASE=g5dh9cg614a37d4bd
For other people wondering, there is an easier option: you can just run eb create again with no parameters, and the EB CLI will take you through the steps of creating a new service role (if you don't have one already).
Note
In Windows, adding an .ebignore file causes the EB CLI to follow symbolic links and include the linked file when creating a source bundle. This is a known issue and will be fixed in a future update.
Reference: EB CLI configuration - AWS website

Access Denied s3cmd from an EC2 machine

I'm trying to set up log rotation for the nginx server that I'm using as a reverse proxy on an EC2 Ubuntu instance.
I want to store those logs in an S3 bucket after each rotation, but I'm only getting "Access denied, are you sure your keys have ListAllMyBuckets permissions" errors when I try to configure the s3cmd tools.
I'm pretty sure that my credentials are correctly configured in IAM; I've tried at least five different credentials (even the root credentials) with the same result. Listing all of my buckets from my local computer with the AWS CLI tools and the same credentials works fine, so it puzzles me that I lack access only on my EC2 instance.
this is what I run:
which s3cmd
/usr/local/bin/s3cmd
s3cmd --configure --debug
Access Key: **************
Secret Key: *******************************
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
and this is the result
...
DEBUG: ConnMan.put(): connection put back to pool (http://s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
DEBUG: HttpHeader: x-amz-id-2: nMI8DF+............
DEBUG: HttpHeader: server: AmazonS3
DEBUG: HttpHeader: transfer-encoding: chunked
DEBUG: HttpHeader: x-amz-request-id: 5912737605BB776C
DEBUG: HttpHeader: date: Wed, 23 Apr 2014 13:16:53 GMT
DEBUG: HttpHeader: content-type: application/xml
DEBUG: ErrorXML: Code: 'AccessDenied'
DEBUG: ErrorXML: Message: 'Access Denied'
DEBUG: ErrorXML: RequestId: '5912737605BB776C'
DEBUG: ErrorXML: HostId: 'nMI8DF+............
ERROR: Test failed: 403 (AccessDenied): Access Denied
ERROR: Are you sure your keys have ListAllMyBuckets permissions?
The only thing that is in front of my nginx server is a load balancer, but I can't see why it could interfere with my request.
Could it be something else that I've missed?
Please check the permissions of the IAM user whose keys you are using.
The steps would be:
In the AWS console, go to the IAM panel
IAM user > select that user > in the bottom menu the 2nd tab is Permissions
Attach a user policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::YOU-Bucket-Name"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::YOU-Bucket-Name/*"
    }
  ]
}
Let me know how it goes
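Once the policy is attached, a quick sanity check with the same keys would be:
s3cmd ls
s3cmd ls s3://YOU-Bucket-Name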
Please don't trust the --configure switch:
I was facing the same problem.
It was showing 403 during --configure, but at the end I saved the settings anyway and then tried a put:
ERROR: Test failed: 403 (AccessDenied): Access Denied
Retry configuration? [Y/n] n
Save settings? [y/N] y
Configuration saved to '/root/.s3cfg'
# s3cmd put MyFile s3://MyBucket/
And it worked!
s3cmd creates a file called .s3cfg in your home directory when you set this up. I would make sure you put this file somewhere your logrotate script can read it, and use the -c flag.
For example to upload the logfile.txt file to the logbucket bucket:
/usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put logfile.txt s3://logbucket
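Tying this back to the original log-rotation goal, a hypothetical logrotate stanza that uploads each rotated file could look like this (paths and bucket name are examples):
/var/log/nginx/access.log {
    daily
    rotate 7
    compress
    delaycompress
    postrotate
        /usr/local/bin/s3cmd -c /home/ubuntu/.s3cfg put /var/log/nginx/access.log.1 s3://logbucket/
    endscript
}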
What version of s3cmd are you using?
I tried s3cmd 1.1, and it seems s3cmd 1.1 does not work with IAM roles.
But someone says s3cmd 1.5 alpha2 has support for IAM roles (http://t1983.file-systems-s3-s3tools.file-systemstalk.info/s3cmd-1-5-0-alpha2-iam-roles-supportincluded-t1983.html).
I have tried s3cmd 1.5 beta1 (https://github.com/s3tools/s3cmd/archive/v1.5.0-beta1.tar.gz), and it works fine with IAM roles.
So there are two ways for s3cmd to access an S3 bucket:
Using an access key and secret key: you need to set up a config file in /root/.s3cfg (the default path) as below:
access_key=xxxxxxxx
secret_key=xxxxxxxxxxxxxxxxxxxx
Note that you only need to set those two key-values in .s3cfg; no other keys are needed.
Using an IAM role with an S3 policy, with s3cmd > 1.5 alpha2: you need to add an IAM role to the EC2 instance, and this role may have a policy like the one below:
{
  "Effect": "Allow",
  "Action": [
    "s3:*"
  ],
  "Resource": "*"
}
I found a solution to my problem by deleting every installation of s3cmd, making sure apt-get was up to date, and then installing it again from apt-get. After my configuration (the same as before) it worked out just fine!
I also had a similar problem. Even after associating my EC2 instance with an IAM role that had an S3 full-access policy, my s3cmd was failing because there wasn't any .s3cfg file on the machine. I fixed it by updating my version of s3cmd.
sudo pip install s3cmd==1.6.1
Did the trick!