OK, here is the scenario: I have a Jenkins slave in AWS, and I've attached to it a role that allows it to create EC2 resources, which I found via the Packer GitHub issue list.
I have my Packer project attempting to build on the slave. When the build starts, it fails with:
Build 'amazon-ebs' errored: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
If I run aws configure and put in real credentials, this obviously works, but I'm trying to avoid that. I have verified that the instance has the proper role attached, and that I can properly switch into this role via the command line.
What I seem to be missing is why this continues to fail when the role is associated with the instance and Packer specifies it with 'iam_instance_profile'.
Any thoughts?
So after a lot of help from Castaglia I was able to get this to work. There seemed to be something wrong with the Role I had created. I deleted it and recreated it with the same name and same policy attached. After that it worked fine.
To note, I believe the Packer instructions have an error. They list the following as all that is needed for the role:
{
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:CreateVolume",
      "ec2:DeleteVolume",
      "ec2:CreateKeypair",
      "ec2:DeleteKeypair",
      "ec2:DescribeSubnets",
      "ec2:CreateSecurityGroup",
      "ec2:DeleteSecurityGroup",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CreateImage",
      "ec2:CopyImage",
      "ec2:RunInstances",
      "ec2:TerminateInstances",
      "ec2:StopInstances",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:DescribeInstances",
      "ec2:CreateSnapshot",
      "ec2:DeleteSnapshot",
      "ec2:DescribeSnapshots",
      "ec2:DescribeImages",
      "ec2:RegisterImage",
      "ec2:CreateTags",
      "ec2:ModifyImageAttribute"
    ],
    "Resource": "*"
  }]
}
But I believe you need one more piece:
{
  "Sid": "PackerIAMPassRole",
  "Effect": "Allow",
  "Action": "iam:PassRole",
  "Resource": ["*"]
}
Doing this allowed me to assume the role and build what I needed.
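To make the fix concrete, here is a small sketch (Python is used here only to assemble the JSON; the base action list is abbreviated, and "Resource": "*" should be tightened where possible) of appending the missing PassRole statement to the documented policy:

```python
import json

# Base policy from the Packer docs (action list abbreviated for brevity).
base_policy = {
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:RunInstances", "ec2:CreateImage", "ec2:CreateTags"],
        "Resource": "*"
    }]
}

# The missing piece: Packer must be allowed to pass the role to the
# EC2 instances it launches when iam_instance_profile is set.
pass_role = {
    "Sid": "PackerIAMPassRole",
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": ["*"]
}

base_policy["Statement"].append(pass_role)
print(json.dumps(base_policy, indent=2))
```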
I have a Glue job which triggers every time data lands in S3. I initially used the AWS-managed AWSGlueServiceRole policy for running my Glue job. But this managed policy is not safe to use, since it gives full access to services like S3 and EC2, which may be dangerous, so I tried to modify a few permissions and ran into problems.
The main problem is with the EC2 resources. For example, see this part of the policy:
{
  "Effect": "Allow",
  "Action": [
    "glue:*",
    "s3:GetBucketLocation",
    "s3:ListBucket",
    "s3:ListAllMyBuckets",
    "s3:GetBucketAcl",
    "ec2:DescribeVpcEndpoints",
    "ec2:DescribeRouteTables",
    "ec2:CreateNetworkInterface",
    "ec2:DeleteNetworkInterface",
    "ec2:DescribeNetworkInterfaces",
    "ec2:DescribeSecurityGroups",
    "ec2:DescribeSubnets",
    "ec2:DescribeVpcAttribute",
    "iam:ListRolePolicies",
    "iam:GetRole",
    "iam:GetRolePolicy",
    "cloudwatch:PutMetricData"
  ],
  "Resource": ["*"]
}
I have taken the above from this page.
I can restrict the S3 access, since I know which buckets the Glue job reads from and writes to, but how can I restrict access to the EC2 actions? Also, which Glue actions do I need, since the policy above just gives full access to Glue as well ("glue:*")?
When deploying a Lambda function to a VPC, you're required to grant a bunch of network-interface-related permissions to the Lambda's execution role. The AWS manuals advise using the AWSLambdaVPCAccessExecutionRole managed policy for this, which looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Resource": "*"
    }
  ]
}
As one can see, this policy doesn't restrict which network interfaces the Lambda can modify, thus potentially allowing it to mess with networking outside its own VPC. I'd like to limit the actions the Lambda can take to the VPC or subnets it's actually deployed into. However, so far I have failed to come up with a working policy for that.
I tried to check the VPC in the policy like this:
"Condition": {"StringEquals": {"ec2:Vpc": "${my_vpc_arn}" }}
but still got permission denied.
The CloudTrail event contains the following authorization message (decoded with aws sts decode-authorization-message): https://pastebin.com/P9t3QWEY, where I can't see any useful keys to check.
So is it possible to restrict a VPC-deployed lambda to only modify particular network interfaces?
You can't restrict the policy to individual ENIs, as you don't know their IDs until after you create them. But you should be able to restrict access to a specific VPC using the following Lambda execution policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AccessToSpecificVPC",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:UnassignPrivateIpAddresses",
        "ec2:AssignPrivateIpAddresses",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*",
      "Condition": {
        "ArnLikeIfExists": {
          "ec2:Vpc": "arn:aws:ec2:<your-region>:<your-account-id>:vpc/<vpc-id>"
        }
      }
    },
    {
      "Sid": "CWLogsPermissions",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:CreateLogGroup",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
The Lambda service needs to be able to create and remove network interfaces in your VPC. That's because a shared ENI is deployed into the VPC, and once all execution contexts are terminated this shared ENI is removed again. This also explains why the describe permissions are needed: the service probably needs to figure out whether a shared ENI is already deployed for the specific Lambda function.
Unfortunately that means you can't restrict the delete/modify operations to any particular ENIs as those are created and removed dynamically.
According to the documentation the specific permissions the Role needs are:
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
I checked the documentation and the Create + Delete actions allow (among others) the following conditions:
ec2:Subnet
ec2:Vpc
This means it should be possible. Maybe separating the ec2:* permissions into their own statement with the aforementioned conditions could help you.
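As a sketch of that last suggestion (the VPC ARN is hypothetical, and the exact behavior of the ec2:Vpc key should be verified against the EC2 condition-key documentation), the ENI actions could be split into their own conditioned statement, assembled here in Python:

```python
import json

# Hypothetical VPC ARN; substitute your region, account ID and VPC ID.
vpc_arn = "arn:aws:ec2:us-east-1:123456789012:vpc/vpc-0abc1234"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # ENI lifecycle actions, conditioned on the function's own VPC.
            "Sid": "ENIActionsScopedToVpc",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:DeleteNetworkInterface"
            ],
            "Resource": "*",
            "Condition": {"ArnEquals": {"ec2:Vpc": vpc_arn}}
        },
        {
            # EC2 Describe* actions generally don't support resource-level
            # conditions, so they stay in an unconditioned statement.
            "Sid": "DescribeENIs",
            "Effect": "Allow",
            "Action": "ec2:DescribeNetworkInterfaces",
            "Resource": "*"
        }
    ]
}
print(json.dumps(policy, indent=2))
```

Note this uses the plain ArnEquals operator rather than ArnLikeIfExists; both are standard condition operators, and which works best depends on whether the key is present on every request.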
I'm trying to create and run an AWS CodePipeline that pulls from GitHub, builds, and deploys to an EC2 instance. The pipeline is as follows:
Source (Github) -> Build (AWS CodeBuild) -> Deploy (AWS CodeDeploy)
The source and build steps both succeed. However, deploy fails consistently giving the following error:
Insufficient permissions
Unable to access the artifact with Amazon S3 object key '[redacted]-2nd-test-pip/BuildArtif/IbiHzen' located in the Amazon S3 artifact bucket 'codepipeline-us-east-1-[redacted]'. The provided role does not have sufficient permissions.
Below is the IAM policy for the CodeBuild service role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:us-east-1:362490217134:log-group:/aws/codebuild/[Redacted]-Build-Project",
        "arn:aws:logs:us-east-1:362490217134:log-group:/aws/codebuild/[Redacted]-Build-Project:*"
      ],
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::codepipeline-us-east-1-*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ]
    },
    {
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::codepipeline-us-east-1-[Redacted]/*"
      ],
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:GetObjectVersion"
      ]
    }
  ]
}
The CodePipeline service role created by the pipeline wizard has S3 full access assigned:
"Action": [
"elasticbeanstalk:*",
"ec2:*",
"elasticloadbalancing:*",
"autoscaling:*",
"cloudwatch:*",
"s3:*",
"sns:*",
"cloudformation:*",
"rds:*",
"sqs:*",
"ecs:*"
],
"Resource": "*",
"Effect": "Allow"
},
I have confirmed numerous times that the artifact referenced in the pipeline deploy step matches the artifact created by the build step.
If I go and look at the path referenced, there is no directory or zip file with that name (not sure which SHOULD be there, but neither is). Additionally, a zip file is generated during the build, but it is never named what the deploy step expects.
I've also gone into the build project and attempted builds using other artifact configurations, but they seem to be ignored when running the build through CodePipeline.
Disclaimer: I've seen similar questions posted here and elsewhere on the interwebs, but each of them deals with ECS or another situation that differs from mine. Thank you for your help.
The issue was unrelated to roles/policies. As mentioned, the expected zip file did not exist in the S3 bucket. This was due to an invalid artifact files path specified in the buildspec. Once that was corrected, the zip file is created and the deploy no longer fails with this error. It seems odd to me that CodePipeline lets the build report success without validating that the files declared as the artifact, and passed to the deploy step, were in fact created.
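Since the root cause was an artifacts path in the buildspec that didn't match what the build produced, one cheap local check is to run the glob against the build output before pushing. A small sketch (the publish/ directory and glob are hypothetical, and pathlib's glob is close to, but not identical to, CodeBuild's matching rules):

```python
from pathlib import Path

def matched_artifact_files(build_root, pattern):
    """Return the files under build_root that an artifacts.files glob
    (e.g. 'publish/**/*' in buildspec.yml) would actually pick up."""
    return sorted(str(p) for p in Path(build_root).glob(pattern) if p.is_file())

# An empty result for your artifacts glob means the deploy stage will
# later fail with exactly this "unable to access the artifact" symptom.
```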
Using .NET Core, Visual Studio 2017 and the AWS Toolkit for Visual Studio 2017, I created a basic web API; the API works as designed.
However, when it comes to publishing/deploying it, the first time works perfectly when the stack doesn't exist: it creates everything it's supposed to. When I make a change and need to re-deploy/publish, it comes back with the following error:
Error creating CloudFormation change set: Stack [TestStack] already exists and cannot be created again with the changeSet [Lambda-Tools-636366731897711782].
Just above the error message is this
Found existing stack: False
I'm wondering if there is something not quite right with how it detects whether the stack exists.
I'm just wondering if I'm missing something, or if this is actually by design, because to republish I have to log into my AWS console, go into the CloudFormation section and delete the existing stack.
(Screenshots: Publish dialog and project structure)
After a bit of digging and general trial and error, I believe this is actually to do with the permissions of the user performing the publish (the user in AWS).
I changed an inline policy to
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "cloudformation:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Here cloudformation:* replaces what used to be several lines of individual permissions.
This now successfully publishes over an existing stack; however, Visual Studio doesn't like it and crashes (although the update does go through to AWS).
AWS' Serverless Application Model is … very new still, and for lack of any documentation about which IAM permissions one needs to deploy an app with their CLI, I've worked out this policy, which seems to work and grants only the least permissions needed for the task:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode",
        "s3:PutObject",
        "cloudformation:DescribeStackEvents",
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStackResource",
        "cloudformation:CreateChangeSet",
        "cloudformation:DescribeChangeSet",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:GetTemplateSummary",
        "cloudformation:ListChangeSets",
        "cloudformation:DescribeStacks"
      ],
      "Resource": [
        "arn:aws:lambda:*:123456789012:function:*-SAM-*",
        "arn:aws:cloudformation:*:123456789012:stack/<STACK NAME OR GLOB>/*",
        "arn:aws:cloudformation:<AWS_REGION>:aws:transform/Serverless-2016-10-31",
        "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "cloudformation:ValidateTemplate",
      "Resource": "*"
    }
  ]
}
Note:
Replace <STACK NAME OR GLOB> with something that best suits your needs, like:
* If you don't care which CloudFormation Stack this grants access to
*-SAM-* If you name your SAM CloudFormation apps with some consistency
Replace <AWS_REGION> with the region you're operating in.
The arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-* pattern is the standard bucket naming the SAM CLI uses when it creates a bucket for deploying CloudFormation templates or change sets. You could alter this to be explicitly the name of the bucket SAM created for you.
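If you script your deploys, the placeholders can also be filled in programmatically; a small sketch (the account ID, region, and stack glob below are illustrative, not taken from the policy above):

```python
import json
from string import Template

# The two CloudFormation ARNs from the policy that carry placeholders.
arn_templates = [
    "arn:aws:cloudformation:*:${account_id}:stack/${stack_glob}/*",
    "arn:aws:cloudformation:${region}:aws:transform/Serverless-2016-10-31",
]

# Illustrative values; substitute your own.
params = {"account_id": "123456789012", "region": "eu-west-1", "stack_glob": "*-SAM-*"}

resources = [Template(t).substitute(params) for t in arn_templates]
print(json.dumps(resources, indent=2))
```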
I am trying to download the AWS CodeDeploy agent file on my Amazon Linux instance. I followed the instructions for Amazon Linux in http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html and have created the appropriate instance profile, service role, etc. Everything is the latest (Amazon Linux, CLI packages; it is a brand-new instance, and I have tried this with at least 3 more brand-new instances with the same result). All instances have full outbound internet access.
But the following statement for downloading the installer from S3 always fails:
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
with the error:
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Can anyone help me with this error?
I figured out the problem. According to the CodeDeploy documentation on IAM instance profiles (http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html), the following permissions need to be given to your IAM instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
But I had limited the resource to my code bucket, since I don't want my instances to access other buckets directly. It turns out I also need to grant permission on the aws-codedeploy-us-east-1/* S3 resource to be able to download the agent. This is not very clear in the documentation on setting up an IAM instance profile for CodeDeploy.
A more restrictive policy that works:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::aws-codedeploy-us-east-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-south-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-central-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-west-1/*",
        "arn:aws:s3:::aws-codedeploy-sa-east-1/*"
      ]
    }
  ]
}