I am trying to set up code deployment using AWS, but when I try to perform a deployment, I get this error:
2016-06-08 23:57:11 ERROR [codedeploy-agent(1207)]: InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::CodeDeployCommand::Errors::AccessDeniedException -
2016-06-08 23:58:41 INFO [codedeploy-agent(1207)]: Version file found in /opt/codedeploy-agent/.version.
2016-06-08 23:58:41 INFO [codedeploy-agent(1207)]: [Aws::CodeDeployCommand::Client 400 0.055741 0 retries] poll_host_command(host_identifier:"IAM-user-ARN") Aws::CodeDeployCommand::Errors::AccessDeniedException
I have two IAM roles: one for the EC2 instance and one for the deployment app.
The S3 bucket has a policy granting access to the IAM role used for deployment:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "XXXXXXXX:role/TestRole"
      },
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "arn:aws:s3:::pmcdeploy/*"
    }
  ]
}
What is going on?
Is the error consistent? Looking at the agent code, it seems like the agent might be having trouble talking to EC2. If this is a persistent problem, can you share the EC2 instance profile?
Also, starting the agent with the verbose option enabled gives a lot more information about what's going on.
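For reference, the verbose flag lives in the agent's configuration file. Below is a hedged sketch that flips it on a temporary copy; the real file on Linux is /etc/codedeploy-agent/conf/codedeployagent.yml (the Ruby-symbol key name `:verbose:` is an assumption based on the agent's config format):

```shell
# Sketch: enable verbose logging for the CodeDeploy agent.
# Real path: /etc/codedeploy-agent/conf/codedeployagent.yml (needs sudo);
# a temp copy is used here so the snippet is safe to run anywhere.
CONF=$(mktemp)
printf ':verbose: false\n' > "$CONF"
sed -i 's/^:verbose: false$/:verbose: true/' "$CONF"
cat "$CONF"
# After editing the real file, restart the agent:
# sudo service codedeploy-agent restart
```

After restarting, the agent log (/var/log/aws/codedeploy-agent/codedeploy-agent.log) carries much more detail.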
Thanks
This is actually related to the order of credential loading. The host agent runs as the root user by default and normally uses the instance profile.
You get this exception when you've set up credentials for the root user, which take priority over the instance profile according to: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#config-settings-and-precedence
The AWS SDK used by the host agent will then sign requests with the credentials configured for the root user instead of the instance profile.
One workaround is to run the agent as a different user and not configure any credentials for that user.
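The precedence is easy to check directly: if a credentials or config file exists for the user the agent runs as (root by default), the SDK uses it before ever consulting the instance profile. A hedged sketch, using an isolated directory in place of /root so it runs anywhere:

```shell
# Sketch: look for file-based credentials that would shadow the instance profile.
# On a real host, check /root/.aws/credentials and /root/.aws/config, since the
# agent runs as root; here FAKE_HOME stands in for /root.
FAKE_HOME=$(mktemp -d)
for f in "$FAKE_HOME/.aws/credentials" "$FAKE_HOME/.aws/config"; do
  if [ -f "$f" ]; then
    echo "shadowing file present: $f"
  else
    echo "absent: $f"
  fi
done
# Absent files mean the SDK falls through to the instance profile.
```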
We had what I think was the same issue.
Our systems had a /root/.aws/credentials file in place, which CodeDeploy absolutely uses, and I found no way of telling it not to, and no documentation on it either.
In the end, we rewrote everything on our end so we no longer need a credentials file in place.
From that moment on, CodeDeploy used the instance profile and everything worked fine.
I deleted /home/ubuntu/.aws and restarted the codedeploy-agent service, and it worked for me :-)
I'm currently writing a Terraform module for EC2 ASGs with ECS. Everything about starting the instances works, including IAM-dependent actions such as KMS-encrypted volumes. However, the ECS agent fails with:
Error getting ECS instance credentials from default chain: NoCredentialProviders: no valid providers in chain.
Unfortunately, most posts I find about this are about the CLI being run without credentials configured; this, however, should of course use the instance profile.
The posts I do find regarding that are about missing IAM policies for the instance role. However, I do have this trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "ecs.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
(I added ECS because someone on SO had it in there; I don't think that's right. I've also removed some conditions.)
It has these policies attached:
AmazonSSMManagedInstanceCore
AmazonEC2ContainerServiceforEC2Role
When I connect via SSH and run any awscli command, I get the error:
Unable to locate credentials. You can configure credentials by running "aws configure".
But with
curl http://169.254.169.254/latest/meta-data/iam/info
I see the correct instance profile ARN and with
curl http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance
I see temporary credentials. With these in the awscli configuration,
aws sts get-caller-identity
returns the correct results. There are no iptables rules or routes blocking access to the metadata service, and I've deactivated the IMDSv2 token requirement for the time being.
I'm using the latest stable version of ECS-optimized Amazon Linux 2.
What might be the issue here?
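One thing worth checking in a case like the question above: the identity-credentials/ec2 path returns an internal EC2 credential set, not the instance-profile credentials the SDK default chain uses. The chain reads iam/security-credentials/<role-name> instead, so that is the path to verify. A hedged sketch (the endpoints are the standard IMDSv1 paths; the commands are only echoed here so the snippet runs without metadata access, and role-name is a placeholder):

```shell
# Sketch: the metadata paths that matter for the SDK credential chain.
# identity-credentials/ec2/... is NOT what the default chain uses.
IMDS="http://169.254.169.254/latest/meta-data"
ROLE_LIST="$IMDS/iam/security-credentials/"           # lists the attached role name
ROLE_CREDS="$IMDS/iam/security-credentials/role-name" # role-name is a placeholder
echo "check: curl $ROLE_LIST"
echo "then:  curl $ROLE_CREDS"
```

If the first call returns nothing, the instance profile is not attached the way the SDK expects, even though other metadata paths respond.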
I am struggling trying to create my first React app. I have connected the app to the CodeCommit repository but the build on the Amplify console fails with this message:
2020-12-14T09:25:04.155Z [ERROR]: !!! Unable to assume specified IAM Role. Please ensure the selected IAM Role has sufficient permissions and the Trust Relationship is configured correctly.
The provision phase works perfectly.
I have created the service role AmplifyConsoleServiceRole-AmplifyRole as suggested in this guide, and I am logged in as a user with AdministratorAccess. Git commits to the repository from my PC console work perfectly.
It is not clear to me which IAM role the AWS Amplify Console is unable to assume; presumably the AmplifyConsoleServiceRole-AmplifyRole that I selected as the service role during app creation. That role's permissions are AdministratorAccess as well. How can I check whether the Trust Relationship is configured correctly?
I've contacted Amazon support. They answered that something is not working on their side in eu-south-1.
I've just tried eu-central-1 and the build process worked as expected. So there was no permissions problem, simply a bug. They told me it will be addressed soon.
Edit: the Amazon support team found the problem; the trust relationship to be used with the eu-south-1 region must be defined in the following way:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "amplify.eu-south-1.amazonaws.com",
          "amplify.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I have set up an EMR cluster with Zeppelin installed on it. I configured Zeppelin with Active Directory authentication, and I have associated those AD users with IAM roles. I was hoping to restrict access to specific resources on S3 after logging into Zeppelin with AD credentials. However, it doesn't seem to respect the permissions the IAM roles define. The EMR role has S3 access, so I am wondering whether that is overriding the per-user permissions, or whether it is actually the only role that matters in this scenario.
Does anyone have any idea?
I'm actually about to tackle this problem this week. I will try to post updates as I have them. I know that this is an old post, but I've found so many helpful things on this site that I figured this might help someone else even if it doesn't help the original poster.
The question was whether anyone has any ideas, and I do have one. So even though I'm not sure it will work yet, I'm posting my idea as a response to the question.
So far, what I've found isn't ideal for large organizations because it requires some per user modifications on the master node, but I haven't run into any blockers yet for a cluster at the scale that I need it to be. At least nothing that can't be fixed with a few configuration management tool scripts.
The idea is to:
Create a vanilla Amazon EMR cluster
Configure SSL
Configure authentication via Active Directory
(this step is what I am currently on) Configure Zeppelin to use impersonation (i.e. run the actual notebook processes as the authenticated user), which so far seems to require creating a local OS (Linux) user (with a username matching the AD username) for each user that will authenticate to the Zeppelin UI. Employing one of the impersonation configurations can then cause Zeppelin to run the notebooks as that OS user (a couple of different impersonation configurations are possible).
Once impersonation is working, manually configure my own OS account's ~/.aws/credentials and ~/.aws/config files.
Write a Notebook that will test various access combinations based on different policies that will be temporarily attached to my account.
The idea is to have the Zeppelin notebook processes kick off as the OS user named the same as the AD-authenticated user, and then have ~/.aws/credentials and ~/.aws/config files in each user's home directory, hoping that the connection to S3 will then follow the policies attached to the IAM identity associated with the keys in each user's credentials file.
I'm crossing my fingers that this will work, because if it doesn't, my idea for how to potentially accomplish this will become significantly more complex. I'm planning on continuing to work on this problem tomorrow afternoon. I'll try to post an update when I have made some more progress.
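Steps 4-5 of the plan above can be sketched as follows; the keys and region are placeholders, and a temp directory stands in for the user's real home directory so the snippet is safe to run:

```shell
# Sketch: per-user AWS credential/config files for Zeppelin impersonation.
# On the real master node this would be /home/<ad-username>/.aws/.
USER_HOME=$(mktemp -d)
mkdir -p "$USER_HOME/.aws"
cat > "$USER_HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEPLACEHOLDER
aws_secret_access_key = examplePlaceholderSecret
EOF
cat > "$USER_HOME/.aws/config" <<'EOF'
[default]
region = us-east-1
EOF
ls "$USER_HOME/.aws"
```

With impersonation working, each notebook process started for that user would pick these files up via the normal SDK credential chain.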
One way to allow S3 access for an IAM user/role is to meet these two conditions:
Create S3 bucket policy matching S3 resources with IAM user/role. This should be done in S3/your bucket/Permissions/Bucket Policy.
Example:
{
  "Version": "2012-10-17",
  "Id": "Policy...843",
  "Statement": [
    {
      "Sid": "Stmt...434",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::<account-id>:user/your-s3-user",
          "arn:aws:iam::<account-id>:role/your-s3-role"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::target-bucket/*",
        "arn:aws:s3:::other-bucket/specific-resource"
      ]
    }
  ]
}
Allow S3 actions for your IAM user/role. This should be done in IAM/Users/your user/Permissions/Add inline policy. Example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
Please note this might not be the only or the best way, but it worked for me.
I'm trying to add the AWS CloudWatch agent to see additional metrics, using the tutorial.
A brief review of what I did:
Create an IAM role and attach it to the EC2 instance doc (NOTE: I do not use the Parameter Store, just communication between EC2 and CloudWatch)
Install the agent using the s3 link
Create the agent configuration file docs
Run the agent using the CLI docs
But it is still not working, and in the agent log I see errors like:
ec2tagger: Unable to initialize EC2 Instance Tags : +NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
While googling, I found not much related to CloudWatch, only that in the IAM role's 'Trust Relationship' config, ec2 should be mentioned in the service section, and it is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Any ideas? Thanks!
In my case the instance had an IAM role attached, but the role was missing the ec2:DescribeTags permission. Adding that fixed the problem.
"The first procedure creates the IAM role that you must attach to each Amazon EC2 instance that runs the CloudWatch agent. This role provides permissions for reading information from the instance and writing it to CloudWatch." in docs
Please attach the IAM role that you created to your EC2 instance first; it worked for me.
The CloudWatch agent process that runs on the EC2 instance needs to be able to describe the instance's tags. The permission required for that is ec2:DescribeTags.
Attaching the managed policy arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy to the instance role will resolve the problem.
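If you'd rather not attach the full managed policy, a hedged minimal subset covering the tag read plus metric/log writes looks like this (the action names are drawn from CloudWatchAgentServerPolicy; trim to what your agent config actually uses):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeTags",
        "cloudwatch:PutMetricData",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```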
Check to see if the CloudWatch Agent service is running (started)
I got the same issue; resolved it by using the below command to refresh routes:
Import-Module C:\ProgramData\Amazon\EC2-Windows\Launch\Module\Ec2Launch.psm1; Add-Routes
Solved by running aws configure from inside the instance
When I tried to deploy a Java web app to an Elastic Beanstalk Tomcat container, it failed with the following error:
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
Please note the following points:
Deployment was automated via Jenkins running on an EC2 server.
This error is not a consistent issue: sometimes deployment succeeded, and sometimes it failed with the above error.
I had this exact problem; from what I could tell it was completely random, but it turned out to be linked to IAM roles. Everything worked perfectly until I added .ebextensions with a database migration script; after that I couldn't get my Bamboo builder to work again. However, I managed to figure it out (no thanks to Amazon's non-existent documentation on what permissions EB needs).
I based my IAM policy on this Gist: https://gist.github.com/magnetikonline/5034bdbb049181a96ac9
However, I had to make some modifications. This specific issue was caused by a too-restrictive policy on S3 gets, so I simply replaced the one provided with:
{
  "Action": [
    "s3:Get*"
  ],
  "Effect": "Allow",
  "Resource": [
    "arn:aws:s3:::elasticbeanstalk-*/*"
  ]
},
This allows users with the policy to perform all kinds of Get operations on the bucket, since I couldn't be bothered to find out which specific one was required.
Uploading to beanstalk involves sending a zipped artifact into S3 along with modifying the cloudformation templates (this part is hands off).
Likely the IAM role attached to the Jenkins runner (or its access credentials) does not have access to the relevant S3 buckets. Verify this via IAM. See: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.iam.html
This is an edge case, but I wanted to capture it here for posterity. This error message can sometimes be returned as a generic error. I spent many weeks working through it with AWS, only to find it was related to Security Token Service (STS) credentials expiring. When you generate STS credentials, the maximum session duration is 36 hours, but if you generate a 36-hour key, some services used by Elastic Beanstalk don't respect this session length and consider the session expired. To work around this, we no longer allow STS credentials with a session length longer than 2 hours.
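A hedged sketch of that workaround: request session tokens capped at two hours. The --duration-seconds flag is a real aws CLI option; the command is only echoed here so the snippet runs without AWS access:

```shell
# Sketch: cap STS session length at 2 hours to avoid the premature-expiry
# behavior described above.
DURATION=$((2 * 60 * 60))   # 7200 seconds
CMD="aws sts get-session-token --duration-seconds $DURATION"
echo "$CMD"
```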
I have also struggled with this and, as in Rick's case, it turned out to be a permissions problem, but his solution didn't work for me.
I have fixed the
Service:AmazonCloudFormation, Message:TemplateURL must reference a valid S3 object to which you have access.
error: adding "s3:Get*" alone wasn't enough; I also needed "s3:List*".
The interesting thing is that I was getting this issue for just one EB environment out of three. It turned out that the other environments deployed to all nodes at once, while the problematic one had Rolling updates enabled (which, obviously, performs other actions, such as adding new instances).
Here is the final IAM policy that works: gist: IAM policy to allow Continuous Integration user to deploy to AWS Elastic Beanstalk
I had the same issue. Based on what I gathered from AWS support, an IAM user requires full access to S3 to perform some actions like deployment. This is because EB uses CloudFormation, which uses S3 to store templates. You need to attach the managed policy AWSElasticBeanstalkFullAccess to the IAM user performing the deployment, or create a policy like the following and attach it to the user:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "*"
    }
  ]
}
Ideally, Amazon would offer a way to restrict the Resource to specific buckets, but it doesn't look like that's doable right now!
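For what it's worth, scoping by resource does appear possible for the S3 part, as the Gist-based answer earlier in this thread did. A hedged variant that narrows the grant to the buckets Elastic Beanstalk itself creates (the elasticbeanstalk-* naming pattern is the standard one, but verify the bucket names in your account):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*"
      ]
    }
  ]
}
```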