Intermittently getting the following error while running an AWS Glue job:
Error downloading script: fatal error: An error occurred (404) when calling the HeadObject operation:
Not sure why it would be intermittent, but this is likely an issue connecting to S3. A few things to check:
Glue Jobs run with an IAM role. You can check your Job details to see what it's currently set to. You should make sure that role has privileges to access the S3 bucket that has your job code in it.
Glue jobs that run inside a VPC (for example, when a Connection is attached) need an S3 VPC endpoint. You should check to make sure that you have one properly created for the VPC you're using.
It is possible to configure a VPC endpoint without associating it with any subnets (route tables). Check your VPC endpoint for the correct routing.
Below is a bit of reference code written with AWS CDK, in case it's helpful
IAM Role
new iam.Role(this, `GlueJobRole`, {
  assumedBy: new iam.ServicePrincipal(`glue.amazonaws.com`),
  managedPolicies: [
    iam.ManagedPolicy.fromAwsManagedPolicyName(
      `service-role/AWSGlueServiceRole`
    ),
  ],
});
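Note that the AWSGlueServiceRole managed policy only grants S3 access to buckets whose names start with aws-glue-, as far as I recall, so if your script lives in a differently named bucket you also need to grant read access explicitly. A minimal sketch, assuming aws-cdk-lib (CDK v2), that the role above is captured in a variable (here called glueJobRole), and a hypothetical SCRIPT_BUCKET_NAME constant:
import * as s3 from "aws-cdk-lib/aws-s3";

// Look up the existing bucket that holds the job script (name is a placeholder).
const scriptBucket = s3.Bucket.fromBucketName(this, `ScriptBucket`, SCRIPT_BUCKET_NAME);
// Grants s3:GetObject*, s3:GetBucket* and s3:List* on that bucket to the job role.
scriptBucket.grantRead(glueJobRole);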
VPC Endpoint
const vpc = ec2.Vpc.fromLookup(this, `VPC`, { vpcId: VPC_ID });

new ec2.GatewayVpcEndpoint(this, `S3VpcEndpoint`, {
  service: ec2.GatewayVpcEndpointAwsService.S3,
  // The subnets option takes SubnetSelection[], so wrap the subnet list.
  subnets: [{ subnets: vpc.publicSubnets }],
  vpc,
});
Your bucket may also be encrypted with a customer managed KMS key. In that case you need to grant the Glue role access to that key in KMS to fix this issue.
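If you're using the CDK snippets above, a minimal sketch of that KMS grant (KMS_KEY_ARN is a placeholder for the customer managed key's ARN, and glueJobRole is the role from the IAM snippet captured in a variable):
import * as kms from "aws-cdk-lib/aws-kms";

// Reference the existing customer managed key that encrypts the script bucket.
const scriptKey = kms.Key.fromKeyArn(this, `ScriptBucketKey`, KMS_KEY_ARN);
// Allows the Glue job role to decrypt the script object when it is downloaded.
scriptKey.grantDecrypt(glueJobRole);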
Related
I'm trying to connect a Spring Boot application from AWS EKS to AWS OpenSearch, both of which reside in a VPC. Though the connection is successful, I'm unable to write any data to the index.
All the AWS resources (EKS and OpenSearch) are configured using Terraform. I have mentioned the Elasticsearch subnet CIDR in the egress which is attached to the application. Also, the application correctly assumes the EKS service account and the pod role, which I mentioned in the services stanza for Elasticsearch. In the policy which is attached to the pod role, I see all the permissions mentioned: ESHttpPost, ESHttpGet, ESHttpPut, etc.
This is the error I get,
{"error":{"root_cause": [{"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-
hellodemo-role-1,backend_roles=
[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo
role-1], requested
Tenant=null]"}],"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld demo-eks-PodRle-
hellodemo-role-1,
backend_roles=[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-
PodRle-hellodemo role-1], requested Tenant=null]"},"status":403}
Is there anything that I'm missing out on while configuring?
This error can be resolved by adding the pod role to the additional_roles key in the Elasticsearch Terraform configuration. This is handled internally by AWS STS when it receives a request from EKS.
(Solved)
I missed this mention in the AWS user guide: "You can use the AmazonEC2FullAccess policy to give users complete access to work with Amazon EC2 Auto Scaling resources, launch templates, and other EC2 resources in their AWS account."
I added the same permissions as the AmazonEC2FullAccess policy to my custom policy, and the Lambda is now working well.
AmazonEC2FullAccess grants full permissions for CloudWatch, EC2, EC2 Auto Scaling, ELB, ELB v2, and limited IAM write permissions.
@Marcin Thanks! Your comment made me check this part.
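For reference, a minimal custom-policy sketch that, as far as I can tell, covers this call: Amazon EC2 Auto Scaling validates the launch template by calling ec2:RunInstances on your behalf, and iam:PassRole is needed if the launch template references an instance profile. The role ARN below is a placeholder.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:UpdateAutoScalingGroup",
                "ec2:RunInstances",
                "ec2:DescribeLaunchTemplateVersions"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "arn:aws:iam::ACCOUNT_ID:role/your-instance-role"
        }
    ]
}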
I'm trying to update the ASG with the 'updateAutoScalingGroup' API from Lambda.
But this error, "AccessDenied: You are not authorized to use launch template", is blocking me...
At first, I applied only the related permissions in the IAM policy based on the documentation, but now I have allowed full permissions for EC2 and Auto Scaling in the policy to try to solve this issue.
But no luck.
On Google, I saw some posts saying this is just a misleading error, or an issue with the AMI not existing.
But my AMI for the launch template is in the same account, same region...
Could you give me some hint or reference to solve this?
Thanks
const AWS = require('aws-sdk')

exports.handler = (event) => {
    const autoscaling = new AWS.AutoScaling()
    const { asgName, templateName, version } = event
    const params = {
        AutoScalingGroupName: asgName,
        LaunchTemplate: {
            LaunchTemplateName: templateName,
            Version: version
        },
        MaxSize: 4,
        MinSize: 1,
        DesiredCapacity: 1
    }
    autoscaling.updateAutoScalingGroup(params, async (err, data) => {
        if (err) console.log("err---", err)
        else console.log("data---", data)
    })
};
Below was added after the comments from Marcin, John Rotenstein, samtoddler
Now the policy has full permissions for EC2, EC2 Auto Scaling, EC2 Image Builder, Auto Scaling, and some permissions for CloudWatch Logs. But no luck yet.
The AMI is in the same account and same region. And I added the account number under 'Modify Image Permissions' on it. (I don't know this area well, but I just tried it.)
describeLaunchTemplates() shows the launchTemplate which I want to use.
CloudTrail shows 'RunInstances' and 'UpdateAutoScalingGroup' events. 'RunInstances' returned "errorCode": "Client.UnauthorizedOperation", and 'UpdateAutoScalingGroup' returned "errorCode": "AccessDenied", "errorMessage": "An unknown error occurred"
Without the LaunchTemplate part, the API works well. (I tried updating only the min and max counts, and it succeeded.)
Even after I changed the AMI to public, it still doesn't work.
Now I'm looking into launch template and AMI-related configuration...
Unfortunately, the errors provided by AWS in some cases are very unclear and could mislead.
Besides checking that you have the proper rights, this error is also returned when you are trying to create an autoscaling group with an invalid AMI or one that doesn't exist.
Actually, the problem is that your EC2 instances have an IAM role attached which you are not authorized to pass. Add the policy below to the Lambda role (or whatever role or IAM user you are using) so that it can pass the role that is attached to the EC2 instances. Once that is done, it will start working.
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:GetRole",
            "iam:PassRole"
        ],
        "Resource": "arn:aws:iam::account-id:role/EC2-roles-for-XYZ-*"
    }]
}
I'm trying to import an existing VPC via Pulumi.
const stackName = pulumi.getStack()
var vpcName = stackName + "-defaultvpc"

console.log("CIDR Block is " + config.cidrBlock)

const envVpc = new aws.ec2.Vpc(vpcName, {
    cidrBlock: config.cidrBlock,
}, { import: config.vpcId });

module.exports = {
    appVpc: envVpc
}
And then I'm executing pulumi up --stack test.
In my understanding, this command is just supposed to import the existing VPC into this test stack.
But during this execution, I'm getting the following error message.
error: Preview failed: refreshing urn:pulumi:test::identity::aws:ec2/vpc:Vpc::test-defaultvpc: UnauthorizedOperation: You are not authorized to perform this operation.
I've confirmed that I have all the read permissions for the VPC in the AWS account, but I am unable to find out exactly which permission Pulumi requires for this operation.
This suggests you don't have authorization from AWS. From the command line where you're running pulumi, do you get the desired vpc in the results when running aws ec2 describe-vpcs?
If you do not, then you'll have to make sure that the credentials Pulumi is using have the ec2:DescribeVpcs permission.
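For example, a minimal policy sketch for the IAM user or role that Pulumi runs as (EC2 Describe* actions don't support resource-level permissions, so the resource is "*"):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ec2:DescribeVpcs",
            "Resource": "*"
        }
    ]
}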
I have set up the CodeDeploy agent; however, when I run it, I get the error:
Error: HEALTH_CONSTRAINTS
Digging further, this is the entry in the CodeDeploy log on the EC2 instance:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::S3::Errors::AccessDenied - Access Denied
I have done a simple wget from the bucket and it returns:
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|xxxxxxxxx|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
On the other hand, if I use the AWS CLI I can correctly reach the S3 bucket.
The EC2 instance is in a VPC, it has a role associated with full permissions on S3, and the inbound and outbound firewall settings seem correct. So it is obviously something related to permissions when accessing over HTTPS.
The questions:
Under which credentials does the CodeDeploy agent run?
What permissions or roles have to be set on the S3 bucket?
The EC2 instance's credentials (the instance role) will be used when pulling from S3.
To be clear, the service role that CodeDeploy needs does not need S3 permissions. That service role allows CodeDeploy to call Auto Scaling and EC2 APIs to describe the instances, so CodeDeploy knows how to deploy to them.
That being said, for your AccessDenied issue with S3, there are two things you need to check:
The role on the EC2 instance(s) has s3:Get* and s3:List* (or more specific) permissions (a minimal sketch is below).
The S3 bucket you want to deploy from has a policy attached that allows the EC2 instance role to get the objects.
Documentation for permissions: http://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html#instances-ec2-configure-2-verify-instance-profile-permissions
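For reference, a minimal instance-role policy sketch along those lines (the bucket name is a placeholder; scope it to your actual deployment bucket):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Resource": [
                "arn:aws:s3:::your-deployment-bucket",
                "arn:aws:s3:::your-deployment-bucket/*"
            ]
        }
    ]
}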
CodeDeploy uses "Service Roles" to access AWS resoures. In the AWS console for CodeDeploy, look for "Service role". Assign the IAM role that you created for CodeDeploy in your application settings.
If you have not created a IAM role for CodeDeploy, do so and then assign it to your CodeDeploy application.
The goal
I want to programmatically add an item to a table in my DynamoDB from my Elastic Beanstalk application, using code similar to:
Item item = new Item()
    .withPrimaryKey(UserIdAttributeName, userId)
    .withString(UserNameAttributeName, userName);
table.putItem(item);
The unexpected result
Logs show the following error message, with the [bold parts] being my edits:
User: arn:aws:sts::[iam id?]:assumed-role/aws-elasticbeanstalk-ec2-role/i-[some number] is not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: [the request id])
I am able to get the table just fine, but things go awry when PutItem is called.
The configuration
I created a new Elastic Beanstalk application. According to the documentation, this automatically assigns the application a new role, called:
aws-elasticbeanstalk-service-role
That same documentation indicates that I can add access to my database as follows:
Add permissions for additional services to the default service role in the IAM console.
So, I found the aws-elasticbeanstalk-service-role role and added to it the managed policy, AmazonDynamoDBFullAccess. This policy looks like the following, with additional actions removed for brevity:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "dynamodb:*",
                [removed for brevity]
                "lambda:DeleteFunction"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
This certainly looks like it should grant the access I need. And, indeed, the policy simulator verifies this. With the following parameters, the action is allowed:
Role: aws-elasticbeanstalk-service-role
Service: DynamoDB
Action: PutItem
Simulation Resource: [Pulled from the above log] arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks
Update
In answer to the good question by filipebarretto, I instantiate the DynamoDB object as follows:
private static DynamoDB createDynamoDB() {
    AmazonDynamoDBClient client = new AmazonDynamoDBClient();
    client.setRegion(Region.getRegion(Regions.US_WEST_2));
    DynamoDB result = new DynamoDB(client);
    return result;
}
According to this documentation, this should be the way to go about it, because it is using the default credentials provider chain and, in turn, the instance profile credentials,
which exist within the instance metadata associated with the IAM role
for the EC2 instance.
[This option] in the default provider chain is available only when
running your application on an EC2 instance, but provides the greatest
ease of use and best security when working with EC2 instances.
Other things I tried
This related Stack Overflow question had an answer that indicated region might be the issue. I've tried tweaking the region with no additional success.
I have tried forcing the usage of the correct credentials using the following:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new InstanceProfileCredentialsProvider());
I have also tried creating an entirely new environment from within Elastic Beanstalk.
In conclusion
By the error in the log, it certainly looks like my Elastic Beanstalk application is assuming the correct role.
And, by the results of the policy simulator, it looks like the role should have permission to do exactly what I want to do.
So...please help!
Thank you!
Update the aws-elasticbeanstalk-ec2-role role, instead of the aws-elasticbeanstalk-service-role.
This salient documentation contains the key:
When you create an environment, AWS Elastic Beanstalk prompts you to provide two AWS Identity and Access Management (IAM) roles, a service role and an instance profile. The service role is assumed by Elastic Beanstalk to use other AWS services on your behalf. The instance profile is applied to the instances in your environment and allows them to upload logs to Amazon S3 and perform other tasks that vary depending on the environment type and platform.
In other words, one of these roles (-service-role) is used by the Beanstalk service itself, while the other (-ec2-role) is applied to the actual instance.
It's the latter that pertains to any permissions you need from within your application code.
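For example, rather than attaching the broad AmazonDynamoDBFullAccess policy to the instance role, a tightly scoped policy on aws-elasticbeanstalk-ec2-role would cover this call. A minimal sketch, using the account ID and table from the error message as placeholders:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:us-west-2:ACCOUNT_ID:table/PiggyBanks"
        }
    ]
}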
To load your credentials, try:
InstanceProfileCredentialsProvider mInstanceProfileCredentialsProvider = new InstanceProfileCredentialsProvider();
AWSCredentials credentials = mInstanceProfileCredentialsProvider.getCredentials();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
or
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain());